I0324 12:55:49.014298 6 e2e.go:243] Starting e2e run "78a0d9b2-261e-486f-9022-e913d92833b7" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585054548 - Will randomize all specs
Will run 215 of 4412 specs

Mar 24 12:55:49.196: INFO: >>> kubeConfig: /root/.kube/config
Mar 24 12:55:49.201: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 24 12:55:49.221: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 24 12:55:49.253: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 24 12:55:49.253: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 24 12:55:49.253: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 24 12:55:49.261: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 24 12:55:49.261: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 24 12:55:49.261: INFO: e2e test version: v1.15.10
Mar 24 12:55:49.263: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:55:49.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Mar 24 12:55:49.338: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 24 12:55:49.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1922'
Mar 24 12:55:51.959: INFO: stderr: ""
Mar 24 12:55:51.959: INFO: stdout: "replicationcontroller/redis-master created\n"
Mar 24 12:55:51.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1922'
Mar 24 12:55:52.350: INFO: stderr: ""
Mar 24 12:55:52.350: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 24 12:55:53.394: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:55:53.394: INFO: Found 0 / 1
Mar 24 12:55:54.355: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:55:54.355: INFO: Found 0 / 1
Mar 24 12:55:55.362: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:55:55.362: INFO: Found 1 / 1
Mar 24 12:55:55.363: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 24 12:55:55.405: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:55:55.405: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 24 12:55:55.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-z4t2v --namespace=kubectl-1922'
Mar 24 12:55:55.512: INFO: stderr: ""
Mar 24 12:55:55.512: INFO: stdout: "Name: redis-master-z4t2v\nNamespace: kubectl-1922\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 24 Mar 2020 12:55:52 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.127\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://90cc469dfdccdbb8d4250553705bc3e8c2afe1b647e00e502db6f0250d25bb06\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 24 Mar 2020 12:55:54 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bmn8k (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bmn8k:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bmn8k\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-1922/redis-master-z4t2v to iruya-worker\n Normal Pulled 2s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n"
Mar 24 12:55:55.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1922'
Mar 24 12:55:55.636: INFO: stderr: ""
Mar 24 12:55:55.636: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1922\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-z4t2v\n"
Mar 24 12:55:55.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1922'
Mar 24 12:55:55.745: INFO: stderr: ""
Mar 24 12:55:55.745: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1922\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.71.232\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.127:6379\nSession Affinity: None\nEvents: \n"
Mar 24 12:55:55.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Mar 24 12:55:55.868: INFO: stderr: ""
Mar 24 12:55:55.868: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 24 Mar 2020 12:55:29 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 24 Mar 2020 12:55:29 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 24 Mar 2020 12:55:29 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 24 Mar 2020 12:55:29 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 8d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 8d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Mar 24 12:55:55.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1922'
Mar 24 12:55:55.971: INFO: stderr: ""
Mar 24 12:55:55.971: INFO: stdout: "Name: kubectl-1922\nLabels: e2e-framework=kubectl\n e2e-run=78a0d9b2-261e-486f-9022-e913d92833b7\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:55:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1922" for this suite.
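[Editor's note] The describe test above exercises `kubectl describe` against each object it created. The commands can be reconstructed directly from the log; they are shown here only as an illustrative sketch and require a live cluster (the namespace `kubectl-1922` and pod name `redis-master-z4t2v` are the generated values from this particular run):

```shell
# Reconstructed from the log above; illustrative only, needs a running cluster.
KUBECONFIG=/root/.kube/config
kubectl --kubeconfig="$KUBECONFIG" describe pod redis-master-z4t2v --namespace=kubectl-1922
kubectl --kubeconfig="$KUBECONFIG" describe rc redis-master --namespace=kubectl-1922
kubectl --kubeconfig="$KUBECONFIG" describe service redis-master --namespace=kubectl-1922
kubectl --kubeconfig="$KUBECONFIG" describe node iruya-control-plane
kubectl --kubeconfig="$KUBECONFIG" describe namespace kubectl-1922
```

The test passes when each describe output contains the expected identifying fields (name, namespace, labels, events) seen in the stdout dumps above.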
Mar 24 12:56:17.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:56:18.093: INFO: namespace kubectl-1922 deletion completed in 22.118054235s
• [SLOW TEST:28.830 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:56:18.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9a186940-6e96-43e7-80c1-7cc11d0befd0
STEP: Creating a pod to test consume configMaps
Mar 24 12:56:18.157: INFO: Waiting up to 5m0s for pod "pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185" in namespace "configmap-1865" to be "success or failure"
Mar 24 12:56:18.172: INFO: Pod "pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185": Phase="Pending", Reason="", readiness=false. Elapsed: 15.053672ms
Mar 24 12:56:20.175: INFO: Pod "pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018565362s
Mar 24 12:56:22.179: INFO: Pod "pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02272884s
STEP: Saw pod success
Mar 24 12:56:22.179: INFO: Pod "pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185" satisfied condition "success or failure"
Mar 24 12:56:22.183: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185 container configmap-volume-test:
STEP: delete the pod
Mar 24 12:56:22.215: INFO: Waiting for pod pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185 to disappear
Mar 24 12:56:22.227: INFO: Pod pod-configmaps-30b97990-be47-4c8d-8979-19ca2e85d185 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:56:22.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1865" for this suite.
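[Editor's note] The ConfigMap volume test above creates a pod that mounts a ConfigMap with a key-to-path mapping and runs as a non-root user. The framework generates the actual spec; the following manifest is only a hand-written sketch of that shape (the image, key names, and paths here are assumptions, not the exact values the test used — only the configMap name comes from the log):

```yaml
# Illustrative sketch, not the generated test spec.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000            # non-root, per the [LinuxOnly] variant
  containers:
  - name: configmap-volume-test
    image: busybox             # assumed image for the sketch
    command: ["cat", "/etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-9a186940-6e96-43e7-80c1-7cc11d0befd0
      items:
      - key: data              # assumed key
        path: path/to/data     # remapped path, the "mappings" under test
  restartPolicy: Never
```

The "success or failure" condition in the log corresponds to the pod running to completion (`Phase="Succeeded"`) after printing the mapped file's content.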
Mar 24 12:56:28.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:56:28.316: INFO: namespace configmap-1865 deletion completed in 6.086013274s
• [SLOW TEST:10.223 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:56:28.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Mar 24 12:56:28.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4282'
Mar 24 12:56:28.595: INFO: stderr: ""
Mar 24 12:56:28.595: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 24 12:56:29.599: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:56:29.599: INFO: Found 0 / 1
Mar 24 12:56:30.598: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:56:30.598: INFO: Found 0 / 1
Mar 24 12:56:31.599: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:56:31.599: INFO: Found 0 / 1
Mar 24 12:56:32.599: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:56:32.599: INFO: Found 1 / 1
Mar 24 12:56:32.599: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Mar 24 12:56:32.602: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:56:32.602: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 24 12:56:32.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2rz7g --namespace=kubectl-4282 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 24 12:56:32.706: INFO: stderr: ""
Mar 24 12:56:32.706: INFO: stdout: "pod/redis-master-2rz7g patched\n"
STEP: checking annotations
Mar 24 12:56:32.710: INFO: Selector matched 1 pods for map[app:redis]
Mar 24 12:56:32.710: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:56:32.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4282" for this suite.
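[Editor's note] The patch applied in the test above is visible verbatim in the log: a strategic-merge patch that adds the annotation `x: y` to the pod. A hand-written equivalent (cluster-dependent, illustrative only; the pod name `redis-master-2rz7g` and namespace `kubectl-4282` were generated for this run, and the follow-up `get` is an assumed way to verify, not a command from the log):

```shell
# The patch the test ran, as recorded in the log:
kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2rz7g \
  --namespace=kubectl-4282 -p '{"metadata":{"annotations":{"x":"y"}}}'

# One way to verify by hand (assumed, not part of the test):
kubectl --kubeconfig=/root/.kube/config get pod redis-master-2rz7g \
  --namespace=kubectl-4282 -o jsonpath='{.metadata.annotations.x}'
```

If the patch applied, the jsonpath query should print `y`; the test itself re-lists the pods matching `app=redis` and checks the annotation on each.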
Mar 24 12:56:54.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:56:54.830: INFO: namespace kubectl-4282 deletion completed in 22.116901792s
• [SLOW TEST:26.513 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:56:54.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 24 12:56:54.917: INFO: Waiting up to 5m0s for pod "downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f" in namespace "downward-api-1646" to be "success or failure"
Mar 24 12:56:54.926: INFO: Pod "downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.827868ms
Mar 24 12:56:56.930: INFO: Pod "downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013102749s
Mar 24 12:56:58.934: INFO: Pod "downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017134457s
STEP: Saw pod success
Mar 24 12:56:58.935: INFO: Pod "downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f" satisfied condition "success or failure"
Mar 24 12:56:58.938: INFO: Trying to get logs from node iruya-worker pod downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f container dapi-container:
STEP: delete the pod
Mar 24 12:56:58.974: INFO: Waiting for pod downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f to disappear
Mar 24 12:56:58.982: INFO: Pod downward-api-741ca3d2-fada-46e6-a3e6-302ad967346f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:56:58.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1646" for this suite.
Mar 24 12:57:04.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:57:05.071: INFO: namespace downward-api-1646 deletion completed in 6.085475242s
• [SLOW TEST:10.240 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:57:05.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Mar 24 12:57:05.155: INFO: Waiting up to 5m0s for pod "var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c" in namespace "var-expansion-9518" to be "success or failure"
Mar 24 12:57:05.161: INFO: Pod "var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675909ms
Mar 24 12:57:07.166: INFO: Pod "var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010958224s
Mar 24 12:57:09.170: INFO: Pod "var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015477517s
STEP: Saw pod success
Mar 24 12:57:09.170: INFO: Pod "var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c" satisfied condition "success or failure"
Mar 24 12:57:09.173: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c container dapi-container:
STEP: delete the pod
Mar 24 12:57:09.193: INFO: Waiting for pod var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c to disappear
Mar 24 12:57:09.197: INFO: Pod var-expansion-81c2ab07-4244-4356-b6a0-8e8d4aff187c no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:57:09.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9518" for this suite.
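[Editor's note] The Variable Expansion test above verifies that `$(VAR)` references in a container's `args` are substituted from the container's environment before the command runs. A hand-written sketch of a pod of that shape (all values here are assumptions for illustration; the real spec is generated by the framework):

```yaml
# Illustrative sketch, not the generated test spec.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c"]
    # $(MESSAGE) is expanded by the kubelet from the env below,
    # not by the shell.
    args: ["echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "test-value"
  restartPolicy: Never
```

The pod running to `Phase="Succeeded"` with the expanded value in its logs is what the framework's "success or failure" check asserts.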
Mar 24 12:57:15.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:57:15.290: INFO: namespace var-expansion-9518 deletion completed in 6.089686338s
• [SLOW TEST:10.218 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:57:15.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 24 12:57:15.378: INFO: Waiting up to 5m0s for pod "pod-04acb107-1e11-4786-9f2b-33384f831f61" in namespace "emptydir-429" to be "success or failure"
Mar 24 12:57:15.383: INFO: Pod "pod-04acb107-1e11-4786-9f2b-33384f831f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653377ms
Mar 24 12:57:17.387: INFO: Pod "pod-04acb107-1e11-4786-9f2b-33384f831f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008775936s
Mar 24 12:57:19.392: INFO: Pod "pod-04acb107-1e11-4786-9f2b-33384f831f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013109s
STEP: Saw pod success
Mar 24 12:57:19.392: INFO: Pod "pod-04acb107-1e11-4786-9f2b-33384f831f61" satisfied condition "success or failure"
Mar 24 12:57:19.395: INFO: Trying to get logs from node iruya-worker pod pod-04acb107-1e11-4786-9f2b-33384f831f61 container test-container:
STEP: delete the pod
Mar 24 12:57:19.412: INFO: Waiting for pod pod-04acb107-1e11-4786-9f2b-33384f831f61 to disappear
Mar 24 12:57:19.419: INFO: Pod pod-04acb107-1e11-4786-9f2b-33384f831f61 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:57:19.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-429" for this suite.
Mar 24 12:57:25.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:57:25.507: INFO: namespace emptydir-429 deletion completed in 6.084776784s
• [SLOW TEST:10.217 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:57:25.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-d9af05ef-5bb0-46a0-9277-cefe770ce1cf
STEP: Creating a pod to test consume secrets
Mar 24 12:57:25.603: INFO: Waiting up to 5m0s for pod "pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970" in namespace "secrets-2464" to be "success or failure"
Mar 24 12:57:25.621: INFO: Pod "pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970": Phase="Pending", Reason="", readiness=false. Elapsed: 18.431438ms
Mar 24 12:57:27.625: INFO: Pod "pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022805117s
Mar 24 12:57:29.630: INFO: Pod "pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027474552s
STEP: Saw pod success
Mar 24 12:57:29.630: INFO: Pod "pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970" satisfied condition "success or failure"
Mar 24 12:57:29.633: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970 container secret-volume-test:
STEP: delete the pod
Mar 24 12:57:29.667: INFO: Waiting for pod pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970 to disappear
Mar 24 12:57:29.681: INFO: Pod pod-secrets-f1c501cc-3cc3-4c89-b69a-63bc1bd08970 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:57:29.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2464" for this suite.
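[Editor's note] The Secrets volume test above mounts a secret with a key remapped to a new path and an explicit per-item file mode. A hand-written sketch of that shape (the secret name comes from the log; the key, path, mode, and image are assumed example values, not the generated spec):

```yaml
# Illustrative sketch, not the generated test spec.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox             # assumed image
    command: ["ls", "-l", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-d9af05ef-5bb0-46a0-9277-cefe770ce1cf
      items:
      - key: data-1            # assumed key
        path: new-path-data-1  # the "mapping" under test
        mode: 0400             # the "Item Mode" under test
  restartPolicy: Never
```

The test asserts that the projected file appears at the remapped path with exactly the requested mode.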
Mar 24 12:57:35.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 12:57:35.790: INFO: namespace secrets-2464 deletion completed in 6.105768529s • [SLOW TEST:10.283 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 12:57:35.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 12:58:35.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8433" for this suite. 
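[Editor's note] The probe test that follows in the log runs a pod whose readiness probe always fails and asserts, over roughly a minute, that the pod never reports Ready and its restart count stays at 0 (a failing readiness probe gates traffic, it does not restart the container — that is the liveness probe's job). A hand-written sketch of such a pod (image and timings are assumed example values):

```yaml
# Illustrative sketch: a readiness probe that can never succeed.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-example
spec:
  containers:
  - name: test-container
    image: busybox             # assumed image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always exits non-zero
      initialDelaySeconds: 5
      periodSeconds: 5
```

Observed from outside, such a pod stays `Running` with `READY 0/1` and `RESTARTS 0`, which is exactly the invariant the conformance test checks.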
Mar 24 12:58:57.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:58:57.983: INFO: namespace container-probe-8433 deletion completed in 22.105945317s
• [SLOW TEST:82.192 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:58:57.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Mar 24 12:58:58.044: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:59:03.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8486" for this suite.
Mar 24 12:59:09.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:59:09.182: INFO: namespace init-container-8486 deletion completed in 6.091122587s
• [SLOW TEST:11.198 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:59:09.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 12:59:09.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af" in namespace "downward-api-2576" to be "success or failure"
Mar 24 12:59:09.290: INFO: Pod "downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af": Phase="Pending", Reason="", readiness=false. Elapsed: 22.68525ms
Mar 24 12:59:11.294: INFO: Pod "downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026807471s
Mar 24 12:59:13.299: INFO: Pod "downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031872815s
STEP: Saw pod success
Mar 24 12:59:13.299: INFO: Pod "downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af" satisfied condition "success or failure"
Mar 24 12:59:13.302: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af container client-container:
STEP: delete the pod
Mar 24 12:59:13.330: INFO: Waiting for pod downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af to disappear
Mar 24 12:59:13.354: INFO: Pod downwardapi-volume-23fe9890-d842-4bbf-9332-0106a32bc6af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:59:13.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2576" for this suite.
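The downward API volume test above projects the container's memory limit into a file that the test container prints. A hedged sketch of such a manifest, assuming nothing beyond the standard downward API volume fields (the pod name, image, mount path, and limit value are illustrative, not taken from the test source):

```yaml
# Sketch only: resourceFieldRef projects limits.memory into a file
# under the downwardAPI volume; the container cats the file so the
# framework can verify the value from its logs.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"               # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```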
Mar 24 12:59:19.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:59:19.452: INFO: namespace downward-api-2576 deletion completed in 6.094596945s
• [SLOW TEST:10.270 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:59:19.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Mar 24 12:59:23.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-df282635-2e97-4391-b7e1-e4ffb62f898e -c busybox-main-container --namespace=emptydir-9514 -- cat /usr/share/volumeshare/shareddata.txt'
Mar 24 12:59:23.751: INFO: stderr: "I0324 12:59:23.668081 231 log.go:172] (0xc0006eea50) (0xc000532820) Create stream\nI0324 12:59:23.668148 231 log.go:172] (0xc0006eea50) (0xc000532820) Stream added, broadcasting: 1\nI0324 12:59:23.672148 231 log.go:172] (0xc0006eea50) Reply frame received for 1\nI0324 12:59:23.672193 231 log.go:172] (0xc0006eea50) (0xc000532000) Create stream\nI0324 12:59:23.672203 231 log.go:172] (0xc0006eea50) (0xc000532000) Stream added, broadcasting: 3\nI0324 12:59:23.673263 231 log.go:172] (0xc0006eea50) Reply frame received for 3\nI0324 12:59:23.673331 231 log.go:172] (0xc0006eea50) (0xc0004103c0) Create stream\nI0324 12:59:23.673444 231 log.go:172] (0xc0006eea50) (0xc0004103c0) Stream added, broadcasting: 5\nI0324 12:59:23.674400 231 log.go:172] (0xc0006eea50) Reply frame received for 5\nI0324 12:59:23.744287 231 log.go:172] (0xc0006eea50) Data frame received for 3\nI0324 12:59:23.744366 231 log.go:172] (0xc000532000) (3) Data frame handling\nI0324 12:59:23.744392 231 log.go:172] (0xc000532000) (3) Data frame sent\nI0324 12:59:23.744411 231 log.go:172] (0xc0006eea50) Data frame received for 3\nI0324 12:59:23.744450 231 log.go:172] (0xc000532000) (3) Data frame handling\nI0324 12:59:23.744481 231 log.go:172] (0xc0006eea50) Data frame received for 5\nI0324 12:59:23.744518 231 log.go:172] (0xc0004103c0) (5) Data frame handling\nI0324 12:59:23.747330 231 log.go:172] (0xc0006eea50) Data frame received for 1\nI0324 12:59:23.747372 231 log.go:172] (0xc000532820) (1) Data frame handling\nI0324 12:59:23.747386 231 log.go:172] (0xc000532820) (1) Data frame sent\nI0324 12:59:23.747401 231 log.go:172] (0xc0006eea50) (0xc000532820) Stream removed, broadcasting: 1\nI0324 12:59:23.747421 231 log.go:172] (0xc0006eea50) Go away received\nI0324 12:59:23.747764 231 log.go:172] (0xc0006eea50) (0xc000532820) Stream removed, broadcasting: 1\nI0324 12:59:23.747797 231 log.go:172] (0xc0006eea50) (0xc000532000) Stream removed, broadcasting: 3\nI0324 12:59:23.747815 231 log.go:172] (0xc0006eea50) (0xc0004103c0) Stream removed, broadcasting: 5\n"
Mar 24 12:59:23.752: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:59:23.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9514" for this suite.
Mar 24 12:59:29.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:59:29.870: INFO: namespace emptydir-9514 deletion completed in 6.114217828s
• [SLOW TEST:10.417 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:59:29.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 24 12:59:29.938: INFO: Waiting up to 5m0s for pod "pod-a0a1a817-953a-4f94-82c8-05436a6c1e48" in namespace "emptydir-7621" to be "success or failure"
Mar 24 12:59:29.942: INFO: Pod "pod-a0a1a817-953a-4f94-82c8-05436a6c1e48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.596535ms
Mar 24 12:59:31.946: INFO: Pod "pod-a0a1a817-953a-4f94-82c8-05436a6c1e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007580413s
Mar 24 12:59:33.950: INFO: Pod "pod-a0a1a817-953a-4f94-82c8-05436a6c1e48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011594661s
STEP: Saw pod success
Mar 24 12:59:33.950: INFO: Pod "pod-a0a1a817-953a-4f94-82c8-05436a6c1e48" satisfied condition "success or failure"
Mar 24 12:59:33.953: INFO: Trying to get logs from node iruya-worker pod pod-a0a1a817-953a-4f94-82c8-05436a6c1e48 container test-container:
STEP: delete the pod
Mar 24 12:59:33.989: INFO: Waiting for pod pod-a0a1a817-953a-4f94-82c8-05436a6c1e48 to disappear
Mar 24 12:59:34.002: INFO: Pod pod-a0a1a817-953a-4f94-82c8-05436a6c1e48 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:59:34.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7621" for this suite.
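The "shared volumes between containers" emptyDir case above (one container writes a file, another reads it back via `kubectl exec`) can be sketched as a two-container pod sharing an `emptyDir` volume. This is a hedged reconstruction: the container names, image, and file contents are illustrative assumptions, with only the mount path and file name taken from the logged `kubectl exec` command.

```yaml
# Sketch only: both containers mount the same emptyDir, so a file
# written by one is visible to the other at the same path.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example   # hypothetical name
spec:
  containers:
  - name: writer-container          # illustrative name
    image: busybox                  # illustrative image
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}
```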
Mar 24 12:59:40.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:59:40.098: INFO: namespace emptydir-7621 deletion completed in 6.09246609s
• [SLOW TEST:10.228 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:59:40.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 12:59:40.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c" in namespace "projected-8715" to be "success or failure"
Mar 24 12:59:40.206: INFO: Pod "downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.88872ms
Mar 24 12:59:42.210: INFO: Pod "downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016703224s
Mar 24 12:59:44.214: INFO: Pod "downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021335955s
STEP: Saw pod success
Mar 24 12:59:44.214: INFO: Pod "downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c" satisfied condition "success or failure"
Mar 24 12:59:44.217: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c container client-container:
STEP: delete the pod
Mar 24 12:59:44.236: INFO: Waiting for pod downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c to disappear
Mar 24 12:59:44.241: INFO: Pod downwardapi-volume-d86801f5-23de-4f8c-9943-e27de728a83c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:59:44.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8715" for this suite.
Mar 24 12:59:50.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 12:59:50.380: INFO: namespace projected-8715 deletion completed in 6.127916438s
• [SLOW TEST:10.282 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 12:59:50.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 24 12:59:50.444: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 24 12:59:55.448: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 12:59:56.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7452" for this suite.
Mar 24 13:00:02.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:00:02.560: INFO: namespace replication-controller-7452 deletion completed in 6.090932231s
• [SLOW TEST:12.180 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:00:02.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-2101/secret-test-dbbc20ea-d5ed-4866-a4ef-61c4720d3878
STEP: Creating a pod to test consume secrets
Mar 24 13:00:02.710: INFO: Waiting up to 5m0s for pod "pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073" in namespace "secrets-2101" to be "success or failure"
Mar 24 13:00:02.729: INFO: Pod "pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073": Phase="Pending", Reason="", readiness=false. Elapsed: 18.721442ms
Mar 24 13:00:04.792: INFO: Pod "pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081264236s
Mar 24 13:00:06.797: INFO: Pod "pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08651649s
STEP: Saw pod success
Mar 24 13:00:06.797: INFO: Pod "pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073" satisfied condition "success or failure"
Mar 24 13:00:06.800: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073 container env-test:
STEP: delete the pod
Mar 24 13:00:06.831: INFO: Waiting for pod pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073 to disappear
Mar 24 13:00:06.834: INFO: Pod pod-configmaps-d00394b0-4890-4805-a3e0-e3587d2c7073 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:00:06.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2101" for this suite.
Mar 24 13:00:12.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:00:12.946: INFO: namespace secrets-2101 deletion completed in 6.107498077s
• [SLOW TEST:10.385 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:00:12.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 24 13:00:13.041: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:00:13.054: INFO: Number of nodes with available pods: 0
Mar 24 13:00:13.054: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:00:14.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:00:14.061: INFO: Number of nodes with available pods: 0
Mar 24 13:00:14.061: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:00:15.080: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:00:15.084: INFO: Number of nodes with available pods: 0
Mar 24 13:00:15.084: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:00:16.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:00:16.062: INFO: Number of nodes with available pods: 0
Mar 24 13:00:16.062: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:00:17.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:00:17.062: INFO: Number of nodes with available pods: 2
Mar 24 13:00:17.062: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 24 13:00:17.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:00:17.104: INFO: Number of nodes with available pods: 2
Mar 24 13:00:17.104: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5325, will wait for the garbage collector to delete the pods
Mar 24 13:00:18.188: INFO: Deleting DaemonSet.extensions daemon-set took: 5.806094ms
Mar 24 13:00:18.288: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.252701ms
Mar 24 13:00:32.191: INFO: Number of nodes with available pods: 0
Mar 24 13:00:32.191: INFO: Number of running nodes: 0, number of available pods: 0
Mar 24 13:00:32.197: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5325/daemonsets","resourceVersion":"1592359"},"items":null}
Mar 24 13:00:32.199: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5325/pods","resourceVersion":"1592359"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:00:32.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5325" for this suite.
Mar 24 13:00:38.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:00:38.297: INFO: namespace daemonsets-5325 deletion completed in 6.083843009s
• [SLOW TEST:25.348 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:00:38.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Mar 24 13:00:38.382: INFO: Waiting up to 5m0s for pod "client-containers-94715715-5c6f-4363-aa17-a277ad08ed13" in namespace "containers-5815" to be "success or failure"
Mar 24 13:00:38.396: INFO: Pod "client-containers-94715715-5c6f-4363-aa17-a277ad08ed13": Phase="Pending", Reason="", readiness=false. Elapsed: 13.844348ms
Mar 24 13:00:40.400: INFO: Pod "client-containers-94715715-5c6f-4363-aa17-a277ad08ed13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018186265s
Mar 24 13:00:42.404: INFO: Pod "client-containers-94715715-5c6f-4363-aa17-a277ad08ed13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022259836s
STEP: Saw pod success
Mar 24 13:00:42.404: INFO: Pod "client-containers-94715715-5c6f-4363-aa17-a277ad08ed13" satisfied condition "success or failure"
Mar 24 13:00:42.407: INFO: Trying to get logs from node iruya-worker pod client-containers-94715715-5c6f-4363-aa17-a277ad08ed13 container test-container:
STEP: delete the pod
Mar 24 13:00:42.424: INFO: Waiting for pod client-containers-94715715-5c6f-4363-aa17-a277ad08ed13 to disappear
Mar 24 13:00:42.453: INFO: Pod client-containers-94715715-5c6f-4363-aa17-a277ad08ed13 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:00:42.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5815" for this suite.
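The "override the image's default command and arguments" test above exercises the standard pod-spec behavior that `command` replaces the image's ENTRYPOINT and `args` replaces its CMD. A hedged sketch (the pod name, image, and values are illustrative assumptions, not the test's actual manifest):

```yaml
# Sketch only: with both fields set, the container runs
# `echo overridden arguments` regardless of what the image's
# ENTRYPOINT/CMD would otherwise be.
apiVersion: v1
kind: Pod
metadata:
  name: command-override-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                       # illustrative image
    command: ["echo"]                    # overrides ENTRYPOINT
    args: ["overridden", "arguments"]    # overrides CMD
```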
Mar 24 13:00:48.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:00:48.569: INFO: namespace containers-5815 deletion completed in 6.11206059s • [SLOW TEST:10.271 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:00:48.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 24 13:00:48.616: INFO: Waiting up to 5m0s for pod "var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34" in namespace "var-expansion-3643" to be "success or failure" Mar 24 13:00:48.641: INFO: Pod "var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34": Phase="Pending", Reason="", readiness=false. Elapsed: 24.777317ms Mar 24 13:00:50.645: INFO: Pod "var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028800768s Mar 24 13:00:52.649: INFO: Pod "var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032747003s STEP: Saw pod success Mar 24 13:00:52.649: INFO: Pod "var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34" satisfied condition "success or failure" Mar 24 13:00:52.745: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34 container dapi-container: STEP: delete the pod Mar 24 13:00:52.777: INFO: Waiting for pod var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34 to disappear Mar 24 13:00:52.788: INFO: Pod var-expansion-c67ccfe3-04e1-46f0-ace1-a93437b3af34 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:00:52.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3643" for this suite. Mar 24 13:00:58.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:00:58.883: INFO: namespace var-expansion-3643 deletion completed in 6.091255262s • [SLOW TEST:10.314 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Mar 24 13:00:58.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0324 13:01:29.499615 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 13:01:29.499: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:01:29.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1787" for 
this suite. Mar 24 13:01:35.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:01:35.589: INFO: namespace gc-1787 deletion completed in 6.086010904s • [SLOW TEST:36.705 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:01:35.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 24 13:01:35.617: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 24 13:01:35.640: INFO: Waiting for terminating namespaces to be deleted... 
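The garbage-collector spec above deletes the Deployment with deleteOptions.PropagationPolicy set to Orphan and then waits 30 seconds to confirm the ReplicaSet survives. As a rough sketch (not taken from the test source), the delete request body that produces this behavior looks like:

```yaml
# meta/v1 DeleteOptions sent as the body of the DELETE request.
# propagationPolicy: Orphan makes the garbage collector strip the
# ownerReferences from dependents instead of cascading the delete,
# so the Deployment's ReplicaSet is orphaned rather than removed.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```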
Mar 24 13:01:35.643: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 24 13:01:35.647: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 24 13:01:35.647: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 13:01:35.647: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 24 13:01:35.647: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 13:01:35.647: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 24 13:01:35.652: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 24 13:01:35.652: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 13:01:35.652: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 24 13:01:35.652: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 13:01:35.652: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 24 13:01:35.652: INFO: Container coredns ready: true, restart count 0 Mar 24 13:01:35.652: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 24 13:01:35.652: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ff3f0795b0434c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
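The FailedScheduling event above is produced by a pod whose nodeSelector matches no node label in the cluster. A minimal sketch of such a pod (the label key/value and image are illustrative, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  # No node carries this label, so the scheduler reports
  # "0/3 nodes are available: 3 node(s) didn't match node selector."
  nodeSelector:
    env: does-not-exist
```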
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:01:36.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6736" for this suite. Mar 24 13:01:42.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:01:42.785: INFO: namespace sched-pred-6736 deletion completed in 6.10535099s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.196 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:01:42.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 24 13:01:42.892: INFO: 
observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:01:52.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2391" for this suite. Mar 24 13:01:58.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:01:58.271: INFO: namespace pods-2391 deletion completed in 6.102147384s • [SLOW TEST:15.485 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:01:58.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a 
pod to test downward API volume plugin Mar 24 13:01:58.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92" in namespace "downward-api-7448" to be "success or failure" Mar 24 13:01:58.358: INFO: Pod "downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92": Phase="Pending", Reason="", readiness=false. Elapsed: 11.327801ms Mar 24 13:02:00.361: INFO: Pod "downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014966255s Mar 24 13:02:02.366: INFO: Pod "downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019412789s STEP: Saw pod success Mar 24 13:02:02.366: INFO: Pod "downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92" satisfied condition "success or failure" Mar 24 13:02:02.369: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92 container client-container: STEP: delete the pod Mar 24 13:02:02.404: INFO: Waiting for pod downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92 to disappear Mar 24 13:02:02.464: INFO: Pod downwardapi-volume-64bc48b1-27ae-466b-96ac-9a2c0aa5fa92 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:02:02.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7448" for this suite. 
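The pod under test mounts a downwardAPI volume that exposes the container's CPU request as a file, which the test then scrapes from the container's logs. A hedged sketch of the kind of manifest involved (names, image, and request value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    # Print the projected file so it appears in the container logs.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```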
Mar 24 13:02:08.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:02:08.574: INFO: namespace downward-api-7448 deletion completed in 6.10613661s • [SLOW TEST:10.303 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:02:08.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 24 13:02:13.185: INFO: Successfully updated pod "labelsupdatefda3dbb1-8829-4318-90cf-141de7f0fd79" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:02:15.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1291" for this suite. 
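The labels-update spec above works because a downwardAPI volume that projects metadata.labels is rewritten by the kubelet when the pod's labels change, without restarting the pod: "Successfully updated pod" is the label mutation, after which the test waits for the file content to follow. The relevant volume fragment might look like this (illustrative, not taken from the test source):

```yaml
# Volume fragment only; the kubelet refreshes the projected file
# after the pod's labels are updated.
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        fieldPath: metadata.labels
```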
Mar 24 13:02:37.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:02:37.323: INFO: namespace downward-api-1291 deletion completed in 22.116956807s • [SLOW TEST:28.748 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:02:37.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-d709e9f1-8f2e-448c-b932-4cff7cf660de STEP: Creating a pod to test consume secrets Mar 24 13:02:37.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d" in namespace "projected-9196" to be "success or failure" Mar 24 13:02:37.424: INFO: Pod "pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.788882ms Mar 24 13:02:39.428: INFO: Pod "pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007626955s Mar 24 13:02:41.433: INFO: Pod "pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012184724s STEP: Saw pod success Mar 24 13:02:41.433: INFO: Pod "pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d" satisfied condition "success or failure" Mar 24 13:02:41.436: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d container projected-secret-volume-test: STEP: delete the pod Mar 24 13:02:41.457: INFO: Waiting for pod pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d to disappear Mar 24 13:02:41.460: INFO: Pod pod-projected-secrets-71183e53-72b3-4498-b196-e7c1f164fc0d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:02:41.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9196" for this suite. 
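For the projected-secret spec above, the behavior under test is that the volume's defaultMode sets the projected files' permissions while the pod-level fsGroup sets their group ownership for a non-root user. A hedged sketch (secret name, user/group IDs, and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 2000     # group ownership applied to the projected files
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440   # applied unless a per-item mode overrides it
      sources:
      - secret:
          name: projected-secret-test
```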
Mar 24 13:02:47.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:02:47.551: INFO: namespace projected-9196 deletion completed in 6.087553876s

• [SLOW TEST:10.228 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:02:47.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Mar 24 13:02:47.633: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Mar 24 13:02:47.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5622'
Mar 24 13:02:47.942: INFO: stderr: ""
Mar 24 13:02:47.942: INFO: stdout: "service/redis-slave created\n"
Mar 24 13:02:47.943: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Mar 24 13:02:47.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5622'
Mar 24 13:02:48.216: INFO: stderr: ""
Mar 24 13:02:48.216: INFO: stdout: "service/redis-master created\n"
Mar 24 13:02:48.216: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 24 13:02:48.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5622'
Mar 24 13:02:48.487: INFO: stderr: ""
Mar 24 13:02:48.487: INFO: stdout: "service/frontend created\n"
Mar 24 13:02:48.488: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Mar 24 13:02:48.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5622'
Mar 24 13:02:48.737: INFO: stderr: ""
Mar 24 13:02:48.738: INFO: stdout: "deployment.apps/frontend created\n"
Mar 24 13:02:48.738: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 24 13:02:48.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5622'
Mar 24 13:02:49.014: INFO: stderr: ""
Mar 24 13:02:49.014: INFO: stdout: "deployment.apps/redis-master created\n"
Mar 24 13:02:49.015: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Mar 24 13:02:49.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5622'
Mar 24 13:02:49.295: INFO: stderr: ""
Mar 24 13:02:49.295: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Mar 24 13:02:49.295: INFO: Waiting for all frontend pods to be Running.
Mar 24 13:02:59.346: INFO: Waiting for frontend to serve content.
Mar 24 13:02:59.364: INFO: Trying to add a new entry to the guestbook.
Mar 24 13:02:59.378: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Mar 24 13:02:59.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5622' Mar 24 13:02:59.555: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:02:59.555: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 24 13:02:59.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5622' Mar 24 13:02:59.689: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:02:59.689: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 24 13:02:59.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5622' Mar 24 13:02:59.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:02:59.807: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 24 13:02:59.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5622' Mar 24 13:02:59.909: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:02:59.909: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 24 13:02:59.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5622' Mar 24 13:03:00.004: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:03:00.004: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 24 13:03:00.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5622' Mar 24 13:03:00.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:03:00.100: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:03:00.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5622" for this suite. 
Mar 24 13:03:38.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:03:38.220: INFO: namespace kubectl-5622 deletion completed in 38.114890971s • [SLOW TEST:50.669 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:03:38.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:03:38.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23" in namespace "downward-api-5943" to be "success or failure" Mar 24 13:03:38.308: INFO: Pod "downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.364071ms Mar 24 13:03:40.312: INFO: Pod "downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025522289s Mar 24 13:03:42.316: INFO: Pod "downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030003535s STEP: Saw pod success Mar 24 13:03:42.316: INFO: Pod "downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23" satisfied condition "success or failure" Mar 24 13:03:42.319: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23 container client-container: STEP: delete the pod Mar 24 13:03:42.351: INFO: Waiting for pod downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23 to disappear Mar 24 13:03:42.361: INFO: Pod downwardapi-volume-1fb09354-f45d-45d5-ac82-ffc896518e23 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:03:42.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5943" for this suite. 
Mar 24 13:03:48.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:03:48.453: INFO: namespace downward-api-5943 deletion completed in 6.088389588s • [SLOW TEST:10.233 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:03:48.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:03:48.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e" in namespace "projected-996" to be "success or failure" Mar 24 13:03:48.555: INFO: Pod "downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.457338ms Mar 24 13:03:50.575: INFO: Pod "downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03128553s Mar 24 13:03:52.580: INFO: Pod "downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036139541s STEP: Saw pod success Mar 24 13:03:52.580: INFO: Pod "downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e" satisfied condition "success or failure" Mar 24 13:03:52.584: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e container client-container: STEP: delete the pod Mar 24 13:03:52.602: INFO: Waiting for pod downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e to disappear Mar 24 13:03:52.649: INFO: Pod downwardapi-volume-eb1bffd6-bdd1-4840-9896-b90c781cfc9e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:03:52.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-996" for this suite. 
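The "set mode on item file" spec differs from the defaultMode case in that the permission is set per projected item, overriding the volume-wide default for just that file. A fragment of the kind of projected downwardAPI volume involved (values illustrative, not taken from the test source):

```yaml
# Volume fragment only: a per-item mode overrides the volume's
# defaultMode for that single file.
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
          mode: 0400
```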
Mar 24 13:03:58.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:03:58.752: INFO: namespace projected-996 deletion completed in 6.098767604s • [SLOW TEST:10.298 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:03:58.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 24 13:03:58.893: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8981,SelfLink:/api/v1/namespaces/watch-8981/configmaps/e2e-watch-test-watch-closed,UID:ed9b16d9-69d4-4028-ace7-9aef96071d60,ResourceVersion:1593242,Generation:0,CreationTimestamp:2020-03-24 13:03:58 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 24 13:03:58.894: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8981,SelfLink:/api/v1/namespaces/watch-8981/configmaps/e2e-watch-test-watch-closed,UID:ed9b16d9-69d4-4028-ace7-9aef96071d60,ResourceVersion:1593243,Generation:0,CreationTimestamp:2020-03-24 13:03:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 24 13:03:58.909: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8981,SelfLink:/api/v1/namespaces/watch-8981/configmaps/e2e-watch-test-watch-closed,UID:ed9b16d9-69d4-4028-ace7-9aef96071d60,ResourceVersion:1593244,Generation:0,CreationTimestamp:2020-03-24 13:03:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 24 
13:03:58.910: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8981,SelfLink:/api/v1/namespaces/watch-8981/configmaps/e2e-watch-test-watch-closed,UID:ed9b16d9-69d4-4028-ace7-9aef96071d60,ResourceVersion:1593245,Generation:0,CreationTimestamp:2020-03-24 13:03:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:03:58.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8981" for this suite. Mar 24 13:04:04.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:04:05.016: INFO: namespace watch-8981 deletion completed in 6.102353828s • [SLOW TEST:6.264 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:04:05.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:04:05.093: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675" in namespace "projected-1379" to be "success or failure" Mar 24 13:04:05.110: INFO: Pod "downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675": Phase="Pending", Reason="", readiness=false. Elapsed: 16.151048ms Mar 24 13:04:07.114: INFO: Pod "downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020309974s Mar 24 13:04:09.118: INFO: Pod "downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675": Phase="Succeeded", Reason="", readiness=false. 
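The "downwardapi-volume-…" pod above exercises the downward API volume plugin with no memory limit set on the container. A minimal sketch of that kind of pod (names and image are illustrative, not the test's exact spec) looks like:

```yaml
# Sketch: downward API volume exposing limits.memory. With no memory limit
# set on the container, the projected file falls back to the node's
# allocatable memory -- which is what this conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```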
Elapsed: 4.025061203s STEP: Saw pod success Mar 24 13:04:09.119: INFO: Pod "downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675" satisfied condition "success or failure" Mar 24 13:04:09.122: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675 container client-container: STEP: delete the pod Mar 24 13:04:09.153: INFO: Waiting for pod downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675 to disappear Mar 24 13:04:09.166: INFO: Pod downwardapi-volume-ea8c5686-72a5-4128-8f43-9e6cf2c03675 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:04:09.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1379" for this suite. Mar 24 13:04:15.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:04:15.259: INFO: namespace projected-1379 deletion completed in 6.090637978s • [SLOW TEST:10.242 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:04:15.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 24 13:04:15.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-542' Mar 24 13:04:15.463: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 24 13:04:15.463: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 24 13:04:17.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-542' Mar 24 13:04:17.654: INFO: stderr: "" Mar 24 13:04:17.654: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:04:17.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-542" for this suite. 
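The deprecation warning above points away from `kubectl run --generator=deployment/apps.v1`. A roughly equivalent Deployment manifest (a sketch mirroring the test's name and image; the label key is assumed, not taken from the generator's exact output) would be:

```yaml
# Sketch: what the deprecated generator invocation above approximately
# produces as a plain apps/v1 Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # assumed label key
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```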
Mar 24 13:05:45.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:05:45.750: INFO: namespace kubectl-542 deletion completed in 1m28.092443325s • [SLOW TEST:90.490 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:05:45.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-kf4b STEP: Creating a pod to test atomic-volume-subpath Mar 24 13:05:45.826: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kf4b" in namespace "subpath-9700" to be "success or failure" Mar 24 13:05:45.916: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.667127ms Mar 24 13:05:48.042: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215807045s Mar 24 13:05:50.046: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.219911342s Mar 24 13:05:52.050: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 6.224029967s Mar 24 13:05:54.055: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 8.22842407s Mar 24 13:05:56.059: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 10.232793563s Mar 24 13:05:58.064: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 12.237149576s Mar 24 13:06:00.068: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 14.241374415s Mar 24 13:06:02.072: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 16.245457095s Mar 24 13:06:04.076: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 18.249798507s Mar 24 13:06:06.080: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 20.254029813s Mar 24 13:06:08.085: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Running", Reason="", readiness=true. Elapsed: 22.258587555s Mar 24 13:06:10.090: INFO: Pod "pod-subpath-test-configmap-kf4b": Phase="Succeeded", Reason="", readiness=false. 
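The subpath pod polled above mounts a ConfigMap key through `subPath`. A minimal sketch of that shape (ConfigMap name, key, and image are hypothetical, not the test's exact spec):

```yaml
# Sketch: a ConfigMap volume mounted via subPath, the pattern this
# atomic-writer test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/data"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/data
      subPath: data            # mount a single key as a file
  volumes:
  - name: config
    configMap:
      name: example-configmap  # hypothetical ConfigMap name
```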
Elapsed: 24.263101319s STEP: Saw pod success Mar 24 13:06:10.090: INFO: Pod "pod-subpath-test-configmap-kf4b" satisfied condition "success or failure" Mar 24 13:06:10.093: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-kf4b container test-container-subpath-configmap-kf4b: STEP: delete the pod Mar 24 13:06:10.128: INFO: Waiting for pod pod-subpath-test-configmap-kf4b to disappear Mar 24 13:06:10.153: INFO: Pod pod-subpath-test-configmap-kf4b no longer exists STEP: Deleting pod pod-subpath-test-configmap-kf4b Mar 24 13:06:10.153: INFO: Deleting pod "pod-subpath-test-configmap-kf4b" in namespace "subpath-9700" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:06:10.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9700" for this suite. Mar 24 13:06:16.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:06:16.286: INFO: namespace subpath-9700 deletion completed in 6.127782025s • [SLOW TEST:30.536 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:06:16.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8b3249c2-d623-47a2-9319-c3eeca316c56 STEP: Creating a pod to test consume secrets Mar 24 13:06:16.411: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31" in namespace "projected-9728" to be "success or failure" Mar 24 13:06:16.422: INFO: Pod "pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31": Phase="Pending", Reason="", readiness=false. Elapsed: 11.517228ms Mar 24 13:06:18.432: INFO: Pod "pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020946932s Mar 24 13:06:20.436: INFO: Pod "pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31": Phase="Succeeded", Reason="", readiness=false. 
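The projected-secret pod above checks that `defaultMode` is applied to projected files. A sketch of that volume shape, with illustrative names:

```yaml
# Sketch: projected secret volume with defaultMode, matching the pattern of
# the test above (secret name and image are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400        # file mode applied unless a source overrides it
      sources:
      - secret:
          name: projected-secret-test   # hypothetical Secret name
```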
Elapsed: 4.025000204s STEP: Saw pod success Mar 24 13:06:20.436: INFO: Pod "pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31" satisfied condition "success or failure" Mar 24 13:06:20.439: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31 container projected-secret-volume-test: STEP: delete the pod Mar 24 13:06:20.468: INFO: Waiting for pod pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31 to disappear Mar 24 13:06:20.476: INFO: Pod pod-projected-secrets-39ea1bb7-bf21-4400-8a45-f9bbf6ab0a31 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:06:20.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9728" for this suite. Mar 24 13:06:26.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:06:26.570: INFO: namespace projected-9728 deletion completed in 6.090522566s • [SLOW TEST:10.284 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:06:26.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:06:26.623: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:06:30.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6779" for this suite. Mar 24 13:07:08.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:07:08.865: INFO: namespace pods-6779 deletion completed in 38.090718677s • [SLOW TEST:42.295 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:07:08.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:07:08.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23" in namespace "projected-3861" to be "success or failure" Mar 24 13:07:08.942: INFO: Pod "downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23": Phase="Pending", Reason="", readiness=false. Elapsed: 37.575317ms Mar 24 13:07:10.945: INFO: Pod "downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041548174s Mar 24 13:07:12.950: INFO: Pod "downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046076707s STEP: Saw pod success Mar 24 13:07:12.950: INFO: Pod "downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23" satisfied condition "success or failure" Mar 24 13:07:12.954: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23 container client-container: STEP: delete the pod Mar 24 13:07:12.990: INFO: Waiting for pod downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23 to disappear Mar 24 13:07:12.998: INFO: Pod downwardapi-volume-fb22257a-d919-40ca-8439-6f0d7ff4ce23 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:07:12.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3861" for this suite. 
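Unlike the memory test earlier in this run, the cpu-limit variant above sets an explicit limit on the container and projects it through the downward API. A sketch (names, image, and divisor are illustrative):

```yaml
# Sketch: downward API volume exposing an explicit cpu limit via
# resourceFieldRef, the pattern this test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m          # report the limit in millicores
```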
Mar 24 13:07:19.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:07:19.120: INFO: namespace projected-3861 deletion completed in 6.118724658s • [SLOW TEST:10.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:07:19.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 24 13:07:19.190: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 5.821365ms)
Mar 24 13:07:19.193: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.897349ms)
Mar 24 13:07:19.197: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.846192ms)
Mar 24 13:07:19.200: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.846147ms)
Mar 24 13:07:19.204: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.489815ms)
Mar 24 13:07:19.207: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.747805ms)
Mar 24 13:07:19.211: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.861647ms)
Mar 24 13:07:19.215: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.479554ms)
Mar 24 13:07:19.218: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.191799ms)
Mar 24 13:07:19.242: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 23.627159ms)
Mar 24 13:07:19.246: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.349974ms)
Mar 24 13:07:19.249: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.265799ms)
Mar 24 13:07:19.252: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.015302ms)
Mar 24 13:07:19.255: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.617958ms)
Mar 24 13:07:19.258: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.56329ms)
Mar 24 13:07:19.260: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.441448ms)
Mar 24 13:07:19.263: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.141ms)
Mar 24 13:07:19.266: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.53956ms)
Mar 24 13:07:19.268: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.528824ms)
Mar 24 13:07:19.271: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.971421ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:07:19.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1104" for this suite. Mar 24 13:07:25.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:07:25.390: INFO: namespace proxy-1104 deletion completed in 6.115421678s • [SLOW TEST:6.269 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:07:25.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:07:25.500: INFO: Creating deployment "test-recreate-deployment" Mar 24 13:07:25.504: INFO: Waiting deployment "test-recreate-deployment" to be 
updated to revision 1 Mar 24 13:07:25.516: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 24 13:07:27.524: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 24 13:07:27.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720652045, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720652045, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720652045, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720652045, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:07:29.531: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 24 13:07:29.538: INFO: Updating deployment test-recreate-deployment Mar 24 13:07:29.538: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 24 13:07:29.765: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-3918,SelfLink:/apis/apps/v1/namespaces/deployment-3918/deployments/test-recreate-deployment,UID:f6db7118-c702-415d-959b-d5e55d998b7f,ResourceVersion:1593867,Generation:2,CreationTimestamp:2020-03-24 13:07:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-24 13:07:29 +0000 UTC 2020-03-24 13:07:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-24 13:07:29 +0000 UTC 2020-03-24 13:07:25 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 24 13:07:29.774: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-3918,SelfLink:/apis/apps/v1/namespaces/deployment-3918/replicasets/test-recreate-deployment-5c8c9cc69d,UID:156ba2ff-5696-440d-91b4-b9f45cf439e2,ResourceVersion:1593865,Generation:1,CreationTimestamp:2020-03-24 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f6db7118-c702-415d-959b-d5e55d998b7f 0xc001eccfe7 0xc001eccfe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 13:07:29.774: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 24 13:07:29.774: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-3918,SelfLink:/apis/apps/v1/namespaces/deployment-3918/replicasets/test-recreate-deployment-6df85df6b9,UID:9c19673a-e39e-471c-a6c9-bc0fd5ad9df9,ResourceVersion:1593857,Generation:2,CreationTimestamp:2020-03-24 13:07:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f6db7118-c702-415d-959b-d5e55d998b7f 0xc001ecd1d7 0xc001ecd1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 13:07:29.777: INFO: Pod "test-recreate-deployment-5c8c9cc69d-krb7m" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-krb7m,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-3918,SelfLink:/api/v1/namespaces/deployment-3918/pods/test-recreate-deployment-5c8c9cc69d-krb7m,UID:62159701-c6cb-424d-8a48-3045c5327c6b,ResourceVersion:1593868,Generation:0,CreationTimestamp:2020-03-24 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 156ba2ff-5696-440d-91b4-b9f45cf439e2 0xc002f82057 0xc002f82058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2xglm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2xglm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2xglm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f820d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f820f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-24 13:07:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:07:29.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3918" for this suite. 
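The Deployment status dumped above shows `Replicas:1, AvailableReplicas:0` together with an `Available False ... MinimumReplicasUnavailable` condition: under the `Recreate` strategy all old pods are deleted before the new ReplicaSet's pods come up, so availability briefly drops to zero. A minimal sketch of how that condition can be derived from the status fields (a hypothetical helper, not the actual controller code):

```python
def available_condition(replicas: int, available_replicas: int) -> tuple[str, str]:
    """Derive the Available condition (status, reason) from Deployment status.

    Sketch only: the real deployment controller also accounts for
    maxUnavailable and minReadySeconds; here we assume the defaults seen
    in the log (Recreate strategy, MinReadySeconds:0).
    """
    if available_replicas >= replicas:
        return ("True", "MinimumReplicasAvailable")
    # Mid-rollout with Recreate: old pods gone, new pod not yet Ready.
    return ("False", "MinimumReplicasUnavailable")


# Matches the dump above: Replicas:1, AvailableReplicas:0, UnavailableReplicas:1
print(available_condition(replicas=1, available_replicas=0))
```

Once the new pod passes its readiness check, `AvailableReplicas` reaches `Replicas` and the condition flips back to `MinimumReplicasAvailable`.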
Mar 24 13:07:35.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:07:35.897: INFO: namespace deployment-3918 deletion completed in 6.11719607s • [SLOW TEST:10.507 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:07:35.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-7ae75a19-ed17-4bb8-907e-ff6bd629477d STEP: Creating a pod to test consume configMaps Mar 24 13:07:35.971: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97" in namespace "projected-2595" to be "success or failure" Mar 24 13:07:35.974: INFO: Pod "pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.196013ms Mar 24 13:07:37.978: INFO: Pod "pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007228549s Mar 24 13:07:39.982: INFO: Pod "pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01148703s STEP: Saw pod success Mar 24 13:07:39.982: INFO: Pod "pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97" satisfied condition "success or failure" Mar 24 13:07:39.986: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97 container projected-configmap-volume-test: STEP: delete the pod Mar 24 13:07:40.057: INFO: Waiting for pod pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97 to disappear Mar 24 13:07:40.070: INFO: Pod pod-projected-configmaps-1e2e4886-2e5f-4e9a-bb6b-c2762f4ccc97 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:07:40.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2595" for this suite. 
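The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines above come from a poll loop in the e2e framework. A minimal sketch of that pattern (a hypothetical helper mirroring the logged behavior, not the framework's actual Go code):

```python
import time


def wait_for_pod_phase(get_phase, timeout: float = 300.0, interval: float = 2.0) -> str:
    """Poll get_phase() until the pod reaches Succeeded or Failed.

    Each attempt logs the current phase and elapsed time, matching the
    'Phase="Pending" ... Elapsed: ...' lines in the log. Raises
    TimeoutError if the terminal phase is not reached within `timeout`
    seconds (the log uses 5m0s).
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)


# Simulated phase sequence matching the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), interval=0.01))
```

The test then asserts the "success or failure" condition was satisfied, fetches the container logs, and deletes the pod, which is exactly the `Saw pod success` / `delete the pod` sequence in the log.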
Mar 24 13:07:46.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:07:46.226: INFO: namespace projected-2595 deletion completed in 6.151902335s • [SLOW TEST:10.328 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:07:46.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 24 13:07:46.282: INFO: Waiting up to 5m0s for pod "pod-87d91062-dc49-4b64-b1f2-965c181229ac" in namespace "emptydir-8681" to be "success or failure" Mar 24 13:07:46.302: INFO: Pod "pod-87d91062-dc49-4b64-b1f2-965c181229ac": Phase="Pending", Reason="", readiness=false. Elapsed: 20.379623ms Mar 24 13:07:48.306: INFO: Pod "pod-87d91062-dc49-4b64-b1f2-965c181229ac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024258082s Mar 24 13:07:50.311: INFO: Pod "pod-87d91062-dc49-4b64-b1f2-965c181229ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028755001s STEP: Saw pod success Mar 24 13:07:50.311: INFO: Pod "pod-87d91062-dc49-4b64-b1f2-965c181229ac" satisfied condition "success or failure" Mar 24 13:07:50.314: INFO: Trying to get logs from node iruya-worker pod pod-87d91062-dc49-4b64-b1f2-965c181229ac container test-container: STEP: delete the pod Mar 24 13:07:50.345: INFO: Waiting for pod pod-87d91062-dc49-4b64-b1f2-965c181229ac to disappear Mar 24 13:07:50.373: INFO: Pod pod-87d91062-dc49-4b64-b1f2-965c181229ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:07:50.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8681" for this suite. Mar 24 13:07:56.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:07:56.467: INFO: namespace emptydir-8681 deletion completed in 6.091535092s • [SLOW TEST:10.241 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:07:56.467: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2914 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2914 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2914 Mar 24 13:07:56.567: INFO: Found 0 stateful pods, waiting for 1 Mar 24 13:08:06.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 24 13:08:06.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 13:08:10.009: INFO: stderr: "I0324 13:08:09.872393 545 log.go:172] (0xc000a26420) (0xc0005d2a00) Create stream\nI0324 13:08:09.872429 545 log.go:172] (0xc000a26420) (0xc0005d2a00) Stream added, broadcasting: 1\nI0324 13:08:09.878761 545 log.go:172] (0xc000a26420) Reply frame received for 1\nI0324 13:08:09.878813 545 log.go:172] (0xc000a26420) (0xc0008ea000) Create stream\nI0324 13:08:09.878833 545 log.go:172] (0xc000a26420) (0xc0008ea000) Stream added, broadcasting: 3\nI0324 13:08:09.879888 545 log.go:172] (0xc000a26420) Reply frame received for 3\nI0324 13:08:09.879926 545 log.go:172] (0xc000a26420) (0xc000964000) Create stream\nI0324 13:08:09.879945 545 
log.go:172] (0xc000a26420) (0xc000964000) Stream added, broadcasting: 5\nI0324 13:08:09.880751 545 log.go:172] (0xc000a26420) Reply frame received for 5\nI0324 13:08:09.958430 545 log.go:172] (0xc000a26420) Data frame received for 5\nI0324 13:08:09.958487 545 log.go:172] (0xc000964000) (5) Data frame handling\nI0324 13:08:09.958513 545 log.go:172] (0xc000964000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 13:08:10.000905 545 log.go:172] (0xc000a26420) Data frame received for 3\nI0324 13:08:10.000940 545 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0324 13:08:10.000962 545 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0324 13:08:10.000976 545 log.go:172] (0xc000a26420) Data frame received for 3\nI0324 13:08:10.000986 545 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0324 13:08:10.001353 545 log.go:172] (0xc000a26420) Data frame received for 5\nI0324 13:08:10.001399 545 log.go:172] (0xc000964000) (5) Data frame handling\nI0324 13:08:10.003364 545 log.go:172] (0xc000a26420) Data frame received for 1\nI0324 13:08:10.003398 545 log.go:172] (0xc0005d2a00) (1) Data frame handling\nI0324 13:08:10.003439 545 log.go:172] (0xc0005d2a00) (1) Data frame sent\nI0324 13:08:10.003485 545 log.go:172] (0xc000a26420) (0xc0005d2a00) Stream removed, broadcasting: 1\nI0324 13:08:10.003527 545 log.go:172] (0xc000a26420) Go away received\nI0324 13:08:10.004116 545 log.go:172] (0xc000a26420) (0xc0005d2a00) Stream removed, broadcasting: 1\nI0324 13:08:10.004147 545 log.go:172] (0xc000a26420) (0xc0008ea000) Stream removed, broadcasting: 3\nI0324 13:08:10.004164 545 log.go:172] (0xc000a26420) (0xc000964000) Stream removed, broadcasting: 5\n" Mar 24 13:08:10.010: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 24 13:08:10.010: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 24 13:08:10.026: INFO: Waiting for pod ss-0 to enter 
Running - Ready=false, currently Running - Ready=true Mar 24 13:08:20.030: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 24 13:08:20.030: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 13:08:20.044: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:20.044: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC }] Mar 24 13:08:20.044: INFO: Mar 24 13:08:20.044: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 24 13:08:21.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996237739s Mar 24 13:08:22.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990557899s Mar 24 13:08:23.208: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.83539623s Mar 24 13:08:24.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.831687119s Mar 24 13:08:25.240: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.805691248s Mar 24 13:08:26.245: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.800470603s Mar 24 13:08:27.250: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.794944323s Mar 24 13:08:28.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.789659013s Mar 24 13:08:29.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 784.405542ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2914 Mar 24 13:08:30.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2914 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:08:30.487: INFO: stderr: "I0324 13:08:30.405536 579 log.go:172] (0xc000116630) (0xc00095a6e0) Create stream\nI0324 13:08:30.405589 579 log.go:172] (0xc000116630) (0xc00095a6e0) Stream added, broadcasting: 1\nI0324 13:08:30.408799 579 log.go:172] (0xc000116630) Reply frame received for 1\nI0324 13:08:30.408847 579 log.go:172] (0xc000116630) (0xc00010e140) Create stream\nI0324 13:08:30.408859 579 log.go:172] (0xc000116630) (0xc00010e140) Stream added, broadcasting: 3\nI0324 13:08:30.409868 579 log.go:172] (0xc000116630) Reply frame received for 3\nI0324 13:08:30.409905 579 log.go:172] (0xc000116630) (0xc000786000) Create stream\nI0324 13:08:30.409921 579 log.go:172] (0xc000116630) (0xc000786000) Stream added, broadcasting: 5\nI0324 13:08:30.410883 579 log.go:172] (0xc000116630) Reply frame received for 5\nI0324 13:08:30.481670 579 log.go:172] (0xc000116630) Data frame received for 3\nI0324 13:08:30.481690 579 log.go:172] (0xc00010e140) (3) Data frame handling\nI0324 13:08:30.481708 579 log.go:172] (0xc00010e140) (3) Data frame sent\nI0324 13:08:30.481796 579 log.go:172] (0xc000116630) Data frame received for 5\nI0324 13:08:30.481810 579 log.go:172] (0xc000786000) (5) Data frame handling\nI0324 13:08:30.481826 579 log.go:172] (0xc000786000) (5) Data frame sent\nI0324 13:08:30.481840 579 log.go:172] (0xc000116630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 13:08:30.481854 579 log.go:172] (0xc000116630) Data frame received for 3\nI0324 13:08:30.481868 579 log.go:172] (0xc00010e140) (3) Data frame handling\nI0324 13:08:30.481887 579 log.go:172] (0xc000786000) (5) Data frame handling\nI0324 13:08:30.483237 579 log.go:172] (0xc000116630) Data frame received for 1\nI0324 13:08:30.483248 579 log.go:172] (0xc00095a6e0) (1) Data frame handling\nI0324 13:08:30.483253 579 log.go:172] (0xc00095a6e0) (1) Data frame sent\nI0324 
13:08:30.483345 579 log.go:172] (0xc000116630) (0xc00095a6e0) Stream removed, broadcasting: 1\nI0324 13:08:30.483421 579 log.go:172] (0xc000116630) Go away received\nI0324 13:08:30.483587 579 log.go:172] (0xc000116630) (0xc00095a6e0) Stream removed, broadcasting: 1\nI0324 13:08:30.483599 579 log.go:172] (0xc000116630) (0xc00010e140) Stream removed, broadcasting: 3\nI0324 13:08:30.483609 579 log.go:172] (0xc000116630) (0xc000786000) Stream removed, broadcasting: 5\n" Mar 24 13:08:30.487: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 24 13:08:30.487: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 24 13:08:30.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:08:30.699: INFO: stderr: "I0324 13:08:30.619777 601 log.go:172] (0xc000a3e630) (0xc0002fab40) Create stream\nI0324 13:08:30.619843 601 log.go:172] (0xc000a3e630) (0xc0002fab40) Stream added, broadcasting: 1\nI0324 13:08:30.624257 601 log.go:172] (0xc000a3e630) Reply frame received for 1\nI0324 13:08:30.624293 601 log.go:172] (0xc000a3e630) (0xc0002fa280) Create stream\nI0324 13:08:30.624304 601 log.go:172] (0xc000a3e630) (0xc0002fa280) Stream added, broadcasting: 3\nI0324 13:08:30.625663 601 log.go:172] (0xc000a3e630) Reply frame received for 3\nI0324 13:08:30.625707 601 log.go:172] (0xc000a3e630) (0xc0001d4000) Create stream\nI0324 13:08:30.625796 601 log.go:172] (0xc000a3e630) (0xc0001d4000) Stream added, broadcasting: 5\nI0324 13:08:30.626772 601 log.go:172] (0xc000a3e630) Reply frame received for 5\nI0324 13:08:30.691338 601 log.go:172] (0xc000a3e630) Data frame received for 3\nI0324 13:08:30.691382 601 log.go:172] (0xc0002fa280) (3) Data frame handling\nI0324 13:08:30.691407 601 log.go:172] (0xc0002fa280) (3) Data frame sent\nI0324 
13:08:30.691446 601 log.go:172] (0xc000a3e630) Data frame received for 3\nI0324 13:08:30.691464 601 log.go:172] (0xc0002fa280) (3) Data frame handling\nI0324 13:08:30.691493 601 log.go:172] (0xc000a3e630) Data frame received for 5\nI0324 13:08:30.691509 601 log.go:172] (0xc0001d4000) (5) Data frame handling\nI0324 13:08:30.691526 601 log.go:172] (0xc0001d4000) (5) Data frame sent\nI0324 13:08:30.691541 601 log.go:172] (0xc000a3e630) Data frame received for 5\nI0324 13:08:30.691556 601 log.go:172] (0xc0001d4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0324 13:08:30.693858 601 log.go:172] (0xc000a3e630) Data frame received for 1\nI0324 13:08:30.693882 601 log.go:172] (0xc0002fab40) (1) Data frame handling\nI0324 13:08:30.693898 601 log.go:172] (0xc0002fab40) (1) Data frame sent\nI0324 13:08:30.693914 601 log.go:172] (0xc000a3e630) (0xc0002fab40) Stream removed, broadcasting: 1\nI0324 13:08:30.693929 601 log.go:172] (0xc000a3e630) Go away received\nI0324 13:08:30.694423 601 log.go:172] (0xc000a3e630) (0xc0002fab40) Stream removed, broadcasting: 1\nI0324 13:08:30.694454 601 log.go:172] (0xc000a3e630) (0xc0002fa280) Stream removed, broadcasting: 3\nI0324 13:08:30.694467 601 log.go:172] (0xc000a3e630) (0xc0001d4000) Stream removed, broadcasting: 5\n" Mar 24 13:08:30.699: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 24 13:08:30.699: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 24 13:08:30.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:08:30.937: INFO: stderr: "I0324 13:08:30.875923 623 log.go:172] (0xc0008e6630) (0xc0005cc960) Create stream\nI0324 13:08:30.875992 623 log.go:172] (0xc0008e6630) 
(0xc0005cc960) Stream added, broadcasting: 1\nI0324 13:08:30.878850 623 log.go:172] (0xc0008e6630) Reply frame received for 1\nI0324 13:08:30.878886 623 log.go:172] (0xc0008e6630) (0xc0005cca00) Create stream\nI0324 13:08:30.878895 623 log.go:172] (0xc0008e6630) (0xc0005cca00) Stream added, broadcasting: 3\nI0324 13:08:30.879702 623 log.go:172] (0xc0008e6630) Reply frame received for 3\nI0324 13:08:30.879744 623 log.go:172] (0xc0008e6630) (0xc0005ccaa0) Create stream\nI0324 13:08:30.879770 623 log.go:172] (0xc0008e6630) (0xc0005ccaa0) Stream added, broadcasting: 5\nI0324 13:08:30.880718 623 log.go:172] (0xc0008e6630) Reply frame received for 5\nI0324 13:08:30.930863 623 log.go:172] (0xc0008e6630) Data frame received for 3\nI0324 13:08:30.930903 623 log.go:172] (0xc0005cca00) (3) Data frame handling\nI0324 13:08:30.930917 623 log.go:172] (0xc0005cca00) (3) Data frame sent\nI0324 13:08:30.930927 623 log.go:172] (0xc0008e6630) Data frame received for 3\nI0324 13:08:30.930937 623 log.go:172] (0xc0005cca00) (3) Data frame handling\nI0324 13:08:30.930997 623 log.go:172] (0xc0008e6630) Data frame received for 5\nI0324 13:08:30.931043 623 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0324 13:08:30.931081 623 log.go:172] (0xc0005ccaa0) (5) Data frame sent\nI0324 13:08:30.931095 623 log.go:172] (0xc0008e6630) Data frame received for 5\nI0324 13:08:30.931110 623 log.go:172] (0xc0005ccaa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0324 13:08:30.932482 623 log.go:172] (0xc0008e6630) Data frame received for 1\nI0324 13:08:30.932502 623 log.go:172] (0xc0005cc960) (1) Data frame handling\nI0324 13:08:30.932512 623 log.go:172] (0xc0005cc960) (1) Data frame sent\nI0324 13:08:30.932530 623 log.go:172] (0xc0008e6630) (0xc0005cc960) Stream removed, broadcasting: 1\nI0324 13:08:30.932551 623 log.go:172] (0xc0008e6630) Go away received\nI0324 13:08:30.933073 623 log.go:172] 
(0xc0008e6630) (0xc0005cc960) Stream removed, broadcasting: 1\nI0324 13:08:30.933094 623 log.go:172] (0xc0008e6630) (0xc0005cca00) Stream removed, broadcasting: 3\nI0324 13:08:30.933104 623 log.go:172] (0xc0008e6630) (0xc0005ccaa0) Stream removed, broadcasting: 5\n" Mar 24 13:08:30.937: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 24 13:08:30.937: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 24 13:08:30.941: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:08:30.941: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:08:30.941: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 24 13:08:30.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 13:08:31.150: INFO: stderr: "I0324 13:08:31.078873 643 log.go:172] (0xc000a36420) (0xc0005f8a00) Create stream\nI0324 13:08:31.078948 643 log.go:172] (0xc000a36420) (0xc0005f8a00) Stream added, broadcasting: 1\nI0324 13:08:31.084187 643 log.go:172] (0xc000a36420) Reply frame received for 1\nI0324 13:08:31.084241 643 log.go:172] (0xc000a36420) (0xc0005f8320) Create stream\nI0324 13:08:31.084258 643 log.go:172] (0xc000a36420) (0xc0005f8320) Stream added, broadcasting: 3\nI0324 13:08:31.085377 643 log.go:172] (0xc000a36420) Reply frame received for 3\nI0324 13:08:31.085421 643 log.go:172] (0xc000a36420) (0xc000324000) Create stream\nI0324 13:08:31.085437 643 log.go:172] (0xc000a36420) (0xc000324000) Stream added, broadcasting: 5\nI0324 13:08:31.086409 643 log.go:172] (0xc000a36420) Reply frame received for 5\nI0324 13:08:31.143150 643 log.go:172] (0xc000a36420) Data frame 
received for 5\nI0324 13:08:31.143185 643 log.go:172] (0xc000a36420) Data frame received for 3\nI0324 13:08:31.143218 643 log.go:172] (0xc0005f8320) (3) Data frame handling\nI0324 13:08:31.143244 643 log.go:172] (0xc0005f8320) (3) Data frame sent\nI0324 13:08:31.143309 643 log.go:172] (0xc000a36420) Data frame received for 3\nI0324 13:08:31.143339 643 log.go:172] (0xc0005f8320) (3) Data frame handling\nI0324 13:08:31.143387 643 log.go:172] (0xc000324000) (5) Data frame handling\nI0324 13:08:31.143424 643 log.go:172] (0xc000324000) (5) Data frame sent\nI0324 13:08:31.143437 643 log.go:172] (0xc000a36420) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 13:08:31.143452 643 log.go:172] (0xc000324000) (5) Data frame handling\nI0324 13:08:31.145343 643 log.go:172] (0xc000a36420) Data frame received for 1\nI0324 13:08:31.145383 643 log.go:172] (0xc0005f8a00) (1) Data frame handling\nI0324 13:08:31.145404 643 log.go:172] (0xc0005f8a00) (1) Data frame sent\nI0324 13:08:31.145429 643 log.go:172] (0xc000a36420) (0xc0005f8a00) Stream removed, broadcasting: 1\nI0324 13:08:31.145565 643 log.go:172] (0xc000a36420) Go away received\nI0324 13:08:31.146091 643 log.go:172] (0xc000a36420) (0xc0005f8a00) Stream removed, broadcasting: 1\nI0324 13:08:31.146122 643 log.go:172] (0xc000a36420) (0xc0005f8320) Stream removed, broadcasting: 3\nI0324 13:08:31.146140 643 log.go:172] (0xc000a36420) (0xc000324000) Stream removed, broadcasting: 5\n" Mar 24 13:08:31.151: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 24 13:08:31.151: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 24 13:08:31.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 13:08:31.355: INFO: stderr: "I0324 13:08:31.269820 665 log.go:172] 
(0xc000130dc0) (0xc0002fe820) Create stream\nI0324 13:08:31.269868 665 log.go:172] (0xc000130dc0) (0xc0002fe820) Stream added, broadcasting: 1\nI0324 13:08:31.272189 665 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0324 13:08:31.273300 665 log.go:172] (0xc000130dc0) (0xc0007ca000) Create stream\nI0324 13:08:31.273407 665 log.go:172] (0xc000130dc0) (0xc0007ca000) Stream added, broadcasting: 3\nI0324 13:08:31.274700 665 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0324 13:08:31.274752 665 log.go:172] (0xc000130dc0) (0xc0002fe000) Create stream\nI0324 13:08:31.274770 665 log.go:172] (0xc000130dc0) (0xc0002fe000) Stream added, broadcasting: 5\nI0324 13:08:31.275619 665 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0324 13:08:31.324399 665 log.go:172] (0xc000130dc0) Data frame received for 5\nI0324 13:08:31.324430 665 log.go:172] (0xc0002fe000) (5) Data frame handling\nI0324 13:08:31.324448 665 log.go:172] (0xc0002fe000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 13:08:31.349983 665 log.go:172] (0xc000130dc0) Data frame received for 5\nI0324 13:08:31.350060 665 log.go:172] (0xc0002fe000) (5) Data frame handling\nI0324 13:08:31.350204 665 log.go:172] (0xc000130dc0) Data frame received for 3\nI0324 13:08:31.350243 665 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0324 13:08:31.350284 665 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0324 13:08:31.350301 665 log.go:172] (0xc000130dc0) Data frame received for 3\nI0324 13:08:31.350316 665 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0324 13:08:31.351832 665 log.go:172] (0xc000130dc0) Data frame received for 1\nI0324 13:08:31.351846 665 log.go:172] (0xc0002fe820) (1) Data frame handling\nI0324 13:08:31.351852 665 log.go:172] (0xc0002fe820) (1) Data frame sent\nI0324 13:08:31.351861 665 log.go:172] (0xc000130dc0) (0xc0002fe820) Stream removed, broadcasting: 1\nI0324 13:08:31.352021 665 log.go:172] (0xc000130dc0) Go away received\nI0324 
13:08:31.352117 665 log.go:172] (0xc000130dc0) (0xc0002fe820) Stream removed, broadcasting: 1\nI0324 13:08:31.352138 665 log.go:172] (0xc000130dc0) (0xc0007ca000) Stream removed, broadcasting: 3\nI0324 13:08:31.352148 665 log.go:172] (0xc000130dc0) (0xc0002fe000) Stream removed, broadcasting: 5\n" Mar 24 13:08:31.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 24 13:08:31.355: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 24 13:08:31.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 13:08:31.608: INFO: stderr: "I0324 13:08:31.484472 688 log.go:172] (0xc000116fd0) (0xc000640960) Create stream\nI0324 13:08:31.484549 688 log.go:172] (0xc000116fd0) (0xc000640960) Stream added, broadcasting: 1\nI0324 13:08:31.487429 688 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0324 13:08:31.487473 688 log.go:172] (0xc000116fd0) (0xc0003a0140) Create stream\nI0324 13:08:31.487486 688 log.go:172] (0xc000116fd0) (0xc0003a0140) Stream added, broadcasting: 3\nI0324 13:08:31.488442 688 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0324 13:08:31.488476 688 log.go:172] (0xc000116fd0) (0xc000640a00) Create stream\nI0324 13:08:31.488488 688 log.go:172] (0xc000116fd0) (0xc000640a00) Stream added, broadcasting: 5\nI0324 13:08:31.489529 688 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0324 13:08:31.568870 688 log.go:172] (0xc000116fd0) Data frame received for 5\nI0324 13:08:31.568900 688 log.go:172] (0xc000640a00) (5) Data frame handling\nI0324 13:08:31.568919 688 log.go:172] (0xc000640a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 13:08:31.601865 688 log.go:172] (0xc000116fd0) Data frame received for 3\nI0324 13:08:31.601918 688 log.go:172] (0xc0003a0140) (3) Data 
frame handling\nI0324 13:08:31.601952 688 log.go:172] (0xc0003a0140) (3) Data frame sent\nI0324 13:08:31.601974 688 log.go:172] (0xc000116fd0) Data frame received for 3\nI0324 13:08:31.601984 688 log.go:172] (0xc0003a0140) (3) Data frame handling\nI0324 13:08:31.602108 688 log.go:172] (0xc000116fd0) Data frame received for 5\nI0324 13:08:31.602134 688 log.go:172] (0xc000640a00) (5) Data frame handling\nI0324 13:08:31.603669 688 log.go:172] (0xc000116fd0) Data frame received for 1\nI0324 13:08:31.603683 688 log.go:172] (0xc000640960) (1) Data frame handling\nI0324 13:08:31.603689 688 log.go:172] (0xc000640960) (1) Data frame sent\nI0324 13:08:31.603722 688 log.go:172] (0xc000116fd0) (0xc000640960) Stream removed, broadcasting: 1\nI0324 13:08:31.603736 688 log.go:172] (0xc000116fd0) Go away received\nI0324 13:08:31.604130 688 log.go:172] (0xc000116fd0) (0xc000640960) Stream removed, broadcasting: 1\nI0324 13:08:31.604152 688 log.go:172] (0xc000116fd0) (0xc0003a0140) Stream removed, broadcasting: 3\nI0324 13:08:31.604161 688 log.go:172] (0xc000116fd0) (0xc000640a00) Stream removed, broadcasting: 5\n" Mar 24 13:08:31.608: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 24 13:08:31.608: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 24 13:08:31.608: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 13:08:31.611: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 24 13:08:41.620: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 24 13:08:41.620: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 24 13:08:41.620: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 24 13:08:41.632: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:41.632: INFO: ss-0 iruya-worker Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC }] Mar 24 13:08:41.632: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:41.632: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:41.632: INFO: Mar 24 13:08:41.632: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 24 13:08:42.637: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:42.637: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC }] Mar 24 13:08:42.637: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:42.637: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:42.637: INFO: Mar 24 13:08:42.637: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 24 13:08:43.642: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:43.642: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC }] Mar 24 13:08:43.642: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:43.642: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:43.642: INFO: Mar 24 13:08:43.642: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 24 13:08:44.647: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:44.647: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:07:56 +0000 UTC }] Mar 24 13:08:44.647: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:44.647: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 
13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:44.647: INFO: Mar 24 13:08:44.647: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 24 13:08:45.652: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:45.652: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:45.652: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:45.652: INFO: Mar 24 13:08:45.652: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 24 13:08:46.658: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:46.658: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:46.658: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:46.658: INFO: Mar 24 13:08:46.658: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 24 13:08:47.663: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:47.663: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:47.663: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:47.663: INFO: Mar 24 13:08:47.663: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 24 13:08:48.669: INFO: POD NODE PHASE GRACE CONDITIONS 
Mar 24 13:08:48.669: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:48.669: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:48.669: INFO: Mar 24 13:08:48.669: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 24 13:08:49.674: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:49.674: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:49.674: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:49.674: INFO: Mar 24 13:08:49.674: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 24 13:08:50.679: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:08:50.679: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:50.679: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:08:20 +0000 UTC }] Mar 24 13:08:50.679: INFO: Mar 24 13:08:50.679: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2914 Mar 24 13:08:51.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:08:51.848: INFO: rc: 1 Mar 24 13:08:51.848: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ ||
true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002abc630 exit status 1 true [0xc002072020 0xc0020720b0 0xc002072188] [0xc002072020 0xc0020720b0 0xc002072188] [0xc002072048 0xc002072128] [0xba70e0 0xba70e0] 0xc00289e360 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 24 13:09:01.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:09:01.936: INFO: rc: 1 Mar 24 13:09:01.936: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000fc0090 exit status 1 true [0xc0005b8048 0xc0005b8158 0xc0005b81d8] [0xc0005b8048 0xc0005b8158 0xc0005b81d8] [0xc0005b8120 0xc0005b81c8] [0xba70e0 0xba70e0] 0xc003088240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:09:11.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:09:12.042: INFO: rc: 1 Mar 24 13:09:12.042: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0022660f0 exit status 1 true [0xc00093c180 0xc00093c4c0 0xc00093c918] [0xc00093c180 0xc00093c4c0 0xc00093c918] [0xc00093c220 0xc00093c658] [0xba70e0 0xba70e0] 0xc002666900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found 
error: exit status 1 Mar 24 13:09:22.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:09:22.150: INFO: rc: 1 Mar 24 13:09:22.150: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000fc0180 exit status 1 true [0xc0005b81e8 0xc0005b82a0 0xc0005b8378] [0xc0005b81e8 0xc0005b82a0 0xc0005b8378] [0xc0005b8290 0xc0005b82f0] [0xba70e0 0xba70e0] 0xc0030887e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:09:32.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:09:32.251: INFO: rc: 1 Mar 24 13:09:32.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0022661b0 exit status 1 true [0xc00093c950 0xc00093cd80 0xc00093cfd8] [0xc00093c950 0xc00093cd80 0xc00093cfd8] [0xc00093cb30 0xc00093cf58] [0xba70e0 0xba70e0] 0xc002666c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:09:42.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:09:42.347: INFO: rc: 1 Mar 24 13:09:42.347: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002abc6f0 exit status 1 true [0xc002072198 0xc0020721f8 0xc002072210] [0xc002072198 0xc0020721f8 0xc002072210] [0xc0020721e8 0xc002072208] [0xba70e0 0xba70e0] 0xc00289e660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:09:52.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:09:52.434: INFO: rc: 1 Mar 24 13:09:52.434: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d56090 exit status 1 true [0xc000a04148 0xc000a04200 0xc000a043b0] [0xc000a04148 0xc000a04200 0xc000a043b0] [0xc000a041f0 0xc000a042c0] [0xba70e0 0xba70e0] 0xc001790600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:10:02.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:10:02.530: INFO: rc: 1 Mar 24 13:10:02.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0022662a0 exit status 1 true [0xc00093d058 0xc00093d410 0xc00093d520] [0xc00093d058 0xc00093d410 0xc00093d520] 
[0xc00093d400 0xc00093d4d8] [0xba70e0 0xba70e0] 0xc002667140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:10:12.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:10:12.628: INFO: rc: 1 Mar 24 13:10:12.628: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d561b0 exit status 1 true [0xc000a043e8 0xc000a04460 0xc000a046c0] [0xc000a043e8 0xc000a04460 0xc000a046c0] [0xc000a04440 0xc000a045e0] [0xba70e0 0xba70e0] 0xc001790fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:10:22.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:10:22.730: INFO: rc: 1 Mar 24 13:10:22.730: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002266360 exit status 1 true [0xc00093d630 0xc00093d910 0xc00093da98] [0xc00093d630 0xc00093d910 0xc00093da98] [0xc00093d828 0xc00093da18] [0xba70e0 0xba70e0] 0xc002667aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:10:32.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Mar 24 13:10:32.828: INFO: rc: 1 Mar 24 13:10:32.828: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d562d0 exit status 1 true [0xc000a04708 0xc000a04838 0xc000a04968] [0xc000a04708 0xc000a04838 0xc000a04968] [0xc000a04748 0xc000a04938] [0xba70e0 0xba70e0] 0xc001c44b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:10:42.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:10:42.965: INFO: rc: 1 Mar 24 13:10:42.965: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d56390 exit status 1 true [0xc000a04980 0xc000a04a78 0xc000a04bb8] [0xc000a04980 0xc000a04a78 0xc000a04bb8] [0xc000a04a48 0xc000a04ac8] [0xba70e0 0xba70e0] 0xc001c45740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:10:52.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:10:53.065: INFO: rc: 1 Mar 24 13:10:53.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-1" not found [] 0xc002b5c0c0 exit status 1 true [0xc000554fd0 0xc000555290 0xc000555470] [0xc000554fd0 0xc000555290 0xc000555470] [0xc0005551f0 0xc000555460] [0xba70e0 0xba70e0] 0xc0027f4720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:11:03.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:11:03.155: INFO: rc: 1 Mar 24 13:11:03.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002b5c180 exit status 1 true [0xc00093c180 0xc00093c4c0 0xc00093c918] [0xc00093c180 0xc00093c4c0 0xc00093c918] [0xc00093c220 0xc00093c658] [0xba70e0 0xba70e0] 0xc001790600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:11:13.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:11:13.245: INFO: rc: 1 Mar 24 13:11:13.245: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002b5c240 exit status 1 true [0xc00093c950 0xc00093cd80 0xc00093cfd8] [0xc00093c950 0xc00093cd80 0xc00093cfd8] [0xc00093cb30 0xc00093cf58] [0xba70e0 0xba70e0] 0xc001790fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:11:23.246: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:11:23.336: INFO: rc: 1 Mar 24 13:11:23.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d560f0 exit status 1 true [0xc0005554f8 0xc000555750 0xc000555868] [0xc0005554f8 0xc000555750 0xc000555868] [0xc000555698 0xc000555858] [0xba70e0 0xba70e0] 0xc002666900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:11:33.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:11:33.441: INFO: rc: 1 Mar 24 13:11:33.441: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d56210 exit status 1 true [0xc000555920 0xc000555a58 0xc000555c30] [0xc000555920 0xc000555a58 0xc000555c30] [0xc0005559b8 0xc000555b70] [0xba70e0 0xba70e0] 0xc002666c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:11:43.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:11:43.539: INFO: rc: 1 Mar 24 13:11:43.539: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000fc00f0 exit status 1 true [0xc000a04148 0xc000a04200 0xc000a043b0] [0xc000a04148 0xc000a04200 0xc000a043b0] [0xc000a041f0 0xc000a042c0] [0xba70e0 0xba70e0] 0xc0027f4f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:11:53.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:11:53.652: INFO: rc: 1 Mar 24 13:11:53.653: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002b5c360 exit status 1 true [0xc00093d058 0xc00093d410 0xc00093d520] [0xc00093d058 0xc00093d410 0xc00093d520] [0xc00093d400 0xc00093d4d8] [0xba70e0 0xba70e0] 0xc001c44b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:12:03.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:12:03.750: INFO: rc: 1 Mar 24 13:12:03.750: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d56300 exit status 1 true [0xc000555c40 0xc000555e58 0xc0005b8048] [0xc000555c40 0xc000555e58 0xc0005b8048] [0xc000555e28 0xc000555f60] [0xba70e0 0xba70e0] 0xc002667140 }: Command 
stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:12:13.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:12:13.844: INFO: rc: 1 Mar 24 13:12:13.844: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002b5c450 exit status 1 true [0xc00093d630 0xc00093d910 0xc00093da98] [0xc00093d630 0xc00093d910 0xc00093da98] [0xc00093d828 0xc00093da18] [0xba70e0 0xba70e0] 0xc001c45740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:12:23.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:12:23.940: INFO: rc: 1 Mar 24 13:12:23.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d563f0 exit status 1 true [0xc0005b80c0 0xc0005b8198 0xc0005b81e8] [0xc0005b80c0 0xc0005b8198 0xc0005b81e8] [0xc0005b8158 0xc0005b81d8] [0xba70e0 0xba70e0] 0xc002667aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:12:33.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:12:34.034: INFO: rc: 1 Mar 24 13:12:34.034: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002266120 exit status 1 true [0xc002072020 0xc0020720b0 0xc002072188] [0xc002072020 0xc0020720b0 0xc002072188] [0xc002072048 0xc002072128] [0xba70e0 0xba70e0] 0xc003088240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:12:44.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:12:44.128: INFO: rc: 1 Mar 24 13:12:44.128: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002266210 exit status 1 true [0xc002072198 0xc0020721f8 0xc002072210] [0xc002072198 0xc0020721f8 0xc002072210] [0xc0020721e8 0xc002072208] [0xba70e0 0xba70e0] 0xc0030887e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:12:54.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:12:54.226: INFO: rc: 1 Mar 24 13:12:54.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002266090 exit status 1 true [0xc000555100 0xc000555340 
0xc0005554f8] [0xc000555100 0xc000555340 0xc0005554f8] [0xc000555290 0xc000555470] [0xba70e0 0xba70e0] 0xc001790600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:13:04.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:13:04.320: INFO: rc: 1 Mar 24 13:13:04.320: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d56090 exit status 1 true [0xc0000ea0d0 0xc002072048 0xc002072128] [0xc0000ea0d0 0xc002072048 0xc002072128] [0xc002072040 0xc0020720d0] [0xba70e0 0xba70e0] 0xc003088240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:13:14.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:13:14.413: INFO: rc: 1 Mar 24 13:13:14.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0022661e0 exit status 1 true [0xc000555578 0xc000555768 0xc000555920] [0xc000555578 0xc000555768 0xc000555920] [0xc000555750 0xc000555868] [0xba70e0 0xba70e0] 0xc001790fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:13:24.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:13:24.508: INFO: rc: 1 Mar 24 13:13:24.508: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002d561b0 exit status 1 true [0xc002072188 0xc0020721e8 0xc002072208] [0xc002072188 0xc0020721e8 0xc002072208] [0xc0020721b0 0xc002072200] [0xba70e0 0xba70e0] 0xc0030887e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:13:34.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:13:34.612: INFO: rc: 1 Mar 24 13:13:34.612: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0022662d0 exit status 1 true [0xc000555948 0xc000555b28 0xc000555c40] [0xc000555948 0xc000555b28 0xc000555c40] [0xc000555a58 0xc000555c30] [0xba70e0 0xba70e0] 0xc002666900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:13:44.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:13:44.715: INFO: rc: 1 Mar 24 13:13:44.715: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000fc00c0 exit status 1 true [0xc0005b8048 0xc0005b8158 0xc0005b81d8] [0xc0005b8048 0xc0005b8158 0xc0005b81d8] [0xc0005b8120 0xc0005b81c8] [0xba70e0 0xba70e0] 0xc0027f4720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 24 13:13:54.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 13:13:54.811: INFO: rc: 1 Mar 24 13:13:54.812: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Mar 24 13:13:54.812: INFO: Scaling statefulset ss to 0 Mar 24 13:13:54.866: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 24 13:13:54.868: INFO: Deleting all statefulset in ns statefulset-2914 Mar 24 13:13:54.871: INFO: Scaling statefulset ss to 0 Mar 24 13:13:54.877: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 13:13:54.879: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:13:54.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2914" for this suite. 
Mar 24 13:14:00.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:14:00.977: INFO: namespace statefulset-2914 deletion completed in 6.083513316s • [SLOW TEST:364.509 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:14:00.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 24 13:14:01.548: INFO: created pod pod-service-account-defaultsa Mar 24 13:14:01.548: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 24 13:14:01.554: INFO: created pod pod-service-account-mountsa Mar 24 13:14:01.554: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 24 13:14:01.582: INFO: created pod pod-service-account-nomountsa Mar 24 13:14:01.582: INFO: pod 
pod-service-account-nomountsa service account token volume mount: false Mar 24 13:14:01.595: INFO: created pod pod-service-account-defaultsa-mountspec Mar 24 13:14:01.595: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 24 13:14:01.655: INFO: created pod pod-service-account-mountsa-mountspec Mar 24 13:14:01.655: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 24 13:14:01.682: INFO: created pod pod-service-account-nomountsa-mountspec Mar 24 13:14:01.682: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 24 13:14:01.691: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 24 13:14:01.691: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 24 13:14:01.793: INFO: created pod pod-service-account-mountsa-nomountspec Mar 24 13:14:01.793: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 24 13:14:01.812: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 24 13:14:01.812: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:14:01.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-225" for this suite. 
Mar 24 13:14:27.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:14:28.045: INFO: namespace svcaccounts-225 deletion completed in 26.185397791s • [SLOW TEST:27.068 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:14:28.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 24 13:14:28.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 24 13:14:28.315: INFO: stderr: "" Mar 24 13:14:28.315: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:14:28.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5568" for this suite. 
Mar 24 13:14:34.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:14:34.443: INFO: namespace kubectl-5568 deletion completed in 6.122942737s • [SLOW TEST:6.397 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:14:34.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-5ab0e643-a935-4bed-9aa3-306dd3119416 STEP: Creating a pod to test consume secrets Mar 24 13:14:34.525: INFO: Waiting up to 5m0s for pod "pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0" in namespace "secrets-605" to be "success or failure" Mar 24 13:14:34.536: INFO: Pod "pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.83254ms Mar 24 13:14:36.552: INFO: Pod "pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026868475s Mar 24 13:14:38.556: INFO: Pod "pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031297529s STEP: Saw pod success Mar 24 13:14:38.557: INFO: Pod "pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0" satisfied condition "success or failure" Mar 24 13:14:38.560: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0 container secret-volume-test: STEP: delete the pod Mar 24 13:14:38.594: INFO: Waiting for pod pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0 to disappear Mar 24 13:14:38.602: INFO: Pod pod-secrets-0d2a427a-f3a2-42c7-89e2-50e0f55789a0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:14:38.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-605" for this suite. 
Mar 24 13:14:44.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:14:44.696: INFO: namespace secrets-605 deletion completed in 6.090650601s • [SLOW TEST:10.253 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:14:44.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:14:44.783: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 24 13:14:44.790: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:44.794: INFO: Number of nodes with available pods: 0 Mar 24 13:14:44.794: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:14:45.799: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:45.803: INFO: Number of nodes with available pods: 0 Mar 24 13:14:45.803: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:14:46.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:46.804: INFO: Number of nodes with available pods: 0 Mar 24 13:14:46.804: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:14:47.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:47.958: INFO: Number of nodes with available pods: 0 Mar 24 13:14:47.958: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:14:48.799: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:48.803: INFO: Number of nodes with available pods: 2 Mar 24 13:14:48.803: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 24 13:14:48.831: INFO: Wrong image for pod: daemon-set-69b72. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 24 13:14:48.831: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:48.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:49.842: INFO: Wrong image for pod: daemon-set-69b72. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:49.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:49.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:50.842: INFO: Wrong image for pod: daemon-set-69b72. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:50.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:50.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:51.864: INFO: Wrong image for pod: daemon-set-69b72. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:51.864: INFO: Pod daemon-set-69b72 is not available Mar 24 13:14:51.864: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 24 13:14:51.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:52.841: INFO: Pod daemon-set-k6gtv is not available Mar 24 13:14:52.841: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:52.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:53.846: INFO: Pod daemon-set-k6gtv is not available Mar 24 13:14:53.846: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:53.849: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:54.842: INFO: Pod daemon-set-k6gtv is not available Mar 24 13:14:54.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:54.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:55.841: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:55.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:56.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 24 13:14:56.842: INFO: Pod daemon-set-tbn52 is not available Mar 24 13:14:56.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:57.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:57.842: INFO: Pod daemon-set-tbn52 is not available Mar 24 13:14:57.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:58.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:58.842: INFO: Pod daemon-set-tbn52 is not available Mar 24 13:14:58.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:14:59.842: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:14:59.842: INFO: Pod daemon-set-tbn52 is not available Mar 24 13:14:59.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:15:00.843: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 24 13:15:00.843: INFO: Pod daemon-set-tbn52 is not available Mar 24 13:15:00.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:15:01.845: INFO: Wrong image for pod: daemon-set-tbn52. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 24 13:15:01.845: INFO: Pod daemon-set-tbn52 is not available Mar 24 13:15:01.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:15:02.842: INFO: Pod daemon-set-j2rhp is not available Mar 24 13:15:02.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 24 13:15:02.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:15:02.851: INFO: Number of nodes with available pods: 1 Mar 24 13:15:02.851: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:15:03.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:15:03.990: INFO: Number of nodes with available pods: 1 Mar 24 13:15:03.990: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:15:04.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 13:15:04.875: INFO: Number of nodes with available pods: 2 Mar 24 13:15:04.875: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4661, will wait for the garbage collector to delete the pods Mar 24 13:15:04.949: INFO: Deleting DaemonSet.extensions daemon-set took: 6.41882ms Mar 24 13:15:05.250: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.251719ms Mar 24 13:15:11.954: INFO: Number of nodes with available pods: 0 Mar 24 13:15:11.954: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 13:15:11.956: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4661/daemonsets","resourceVersion":"1595217"},"items":null} Mar 24 13:15:11.959: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4661/pods","resourceVersion":"1595217"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:15:11.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4661" for this suite. Mar 24 13:15:17.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:15:18.082: INFO: namespace daemonsets-4661 deletion completed in 6.108828688s • [SLOW TEST:33.386 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:15:18.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-bbe3d0c3-cdd8-42f2-b652-d54e7c654d96 STEP: Creating a pod to test consume secrets Mar 24 13:15:18.157: INFO: Waiting up to 5m0s for 
pod "pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560" in namespace "projected-1099" to be "success or failure" Mar 24 13:15:18.161: INFO: Pod "pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560": Phase="Pending", Reason="", readiness=false. Elapsed: 3.671174ms Mar 24 13:15:20.164: INFO: Pod "pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006984339s Mar 24 13:15:22.167: INFO: Pod "pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010219156s STEP: Saw pod success Mar 24 13:15:22.167: INFO: Pod "pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560" satisfied condition "success or failure" Mar 24 13:15:22.170: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560 container projected-secret-volume-test: STEP: delete the pod Mar 24 13:15:22.200: INFO: Waiting for pod pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560 to disappear Mar 24 13:15:22.215: INFO: Pod pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:15:22.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1099" for this suite. 
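The projected-secret test above creates a secret plus a short-lived consumer pod along these lines. The secret name, pod name, namespace, and container name are taken from the log; the image, args, and mount path are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-aac6e390-c691-4031-adf1-8dfdbb44a560
  namespace: projected-1099
spec:
  restartPolicy: Never   # pod terminates, so the "success or failure" condition can be observed
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0          # assumed test image
    args: ["--file_content=/etc/projected-secret-volume/data-1"]    # assumed
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-bbe3d0c3-cdd8-42f2-b652-d54e7c654d96
```

The pod reaching Phase="Succeeded" (rather than staying Running) is what satisfies the "success or failure" wait, after which the test reads the container log to verify the secret content.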
Mar 24 13:15:28.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:15:28.310: INFO: namespace projected-1099 deletion completed in 6.089739881s • [SLOW TEST:10.228 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:15:28.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 24 13:15:32.479: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:15:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4784" for this suite. Mar 24 13:15:38.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:15:38.594: INFO: namespace container-runtime-4784 deletion completed in 6.095059688s • [SLOW TEST:10.283 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:15:38.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:15:38.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b" in namespace "downward-api-327" to be "success or failure" Mar 24 13:15:38.720: INFO: Pod "downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b": Phase="Pending", Reason="", readiness=false. Elapsed: 43.241709ms Mar 24 13:15:40.750: INFO: Pod "downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07322805s Mar 24 13:15:42.754: INFO: Pod "downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07726287s STEP: Saw pod success Mar 24 13:15:42.754: INFO: Pod "downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b" satisfied condition "success or failure" Mar 24 13:15:42.757: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b container client-container: STEP: delete the pod Mar 24 13:15:42.810: INFO: Waiting for pod downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b to disappear Mar 24 13:15:42.819: INFO: Pod downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:15:42.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-327" for this suite. 
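The DefaultMode test above exercises the `defaultMode` field of a downward API volume. A minimal sketch is below; the pod name, namespace, and container name come from the log, while the image, args, item path, and the concrete mode value (shown as 0400 for illustration) are assumptions, since the exact value is not in the output.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-a42ea62d-3056-4135-b587-f1c09308b59b
  namespace: downward-api-327
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed
    args: ["--file_mode=/etc/podinfo/podname"]              # assumed: prints the file's mode bits
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # the property under test: applied to every projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```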
Mar 24 13:15:48.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:15:48.912: INFO: namespace downward-api-327 deletion completed in 6.090053509s • [SLOW TEST:10.317 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:15:48.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 24 13:15:48.980: INFO: Waiting up to 5m0s for pod "pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7" in namespace "emptydir-773" to be "success or failure" Mar 24 13:15:49.011: INFO: Pod "pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.886992ms Mar 24 13:15:51.015: INFO: Pod "pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034996491s Mar 24 13:15:53.019: INFO: Pod "pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039133506s STEP: Saw pod success Mar 24 13:15:53.019: INFO: Pod "pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7" satisfied condition "success or failure" Mar 24 13:15:53.022: INFO: Trying to get logs from node iruya-worker2 pod pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7 container test-container: STEP: delete the pod Mar 24 13:15:53.202: INFO: Waiting for pod pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7 to disappear Mar 24 13:15:53.209: INFO: Pod pod-feef5a3f-9779-4f0b-aff5-d46bde9f25f7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:15:53.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-773" for this suite. Mar 24 13:15:59.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:15:59.327: INFO: namespace emptydir-773 deletion completed in 6.113814622s • [SLOW TEST:10.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:15:59.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 24 13:15:59.365: INFO: namespace kubectl-6758 Mar 24 13:15:59.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6758' Mar 24 13:15:59.641: INFO: stderr: "" Mar 24 13:15:59.641: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 24 13:16:00.645: INFO: Selector matched 1 pods for map[app:redis] Mar 24 13:16:00.645: INFO: Found 0 / 1 Mar 24 13:16:01.646: INFO: Selector matched 1 pods for map[app:redis] Mar 24 13:16:01.646: INFO: Found 0 / 1 Mar 24 13:16:02.646: INFO: Selector matched 1 pods for map[app:redis] Mar 24 13:16:02.646: INFO: Found 0 / 1 Mar 24 13:16:03.646: INFO: Selector matched 1 pods for map[app:redis] Mar 24 13:16:03.646: INFO: Found 1 / 1 Mar 24 13:16:03.646: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 24 13:16:03.649: INFO: Selector matched 1 pods for map[app:redis] Mar 24 13:16:03.649: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 24 13:16:03.649: INFO: wait on redis-master startup in kubectl-6758 Mar 24 13:16:03.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cgdf2 redis-master --namespace=kubectl-6758' Mar 24 13:16:03.754: INFO: stderr: "" Mar 24 13:16:03.754: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Mar 13:16:01.933 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Mar 13:16:01.933 # Server started, Redis version 3.2.12\n1:M 24 Mar 13:16:01.933 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Mar 13:16:01.933 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 24 13:16:03.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6758' Mar 24 13:16:03.954: INFO: stderr: "" Mar 24 13:16:03.954: INFO: stdout: "service/rm2 exposed\n" Mar 24 13:16:03.958: INFO: Service rm2 in namespace kubectl-6758 found. STEP: exposing service Mar 24 13:16:05.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6758' Mar 24 13:16:06.140: INFO: stderr: "" Mar 24 13:16:06.140: INFO: stdout: "service/rm3 exposed\n" Mar 24 13:16:06.150: INFO: Service rm3 in namespace kubectl-6758 found. 
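The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` call logged above generates a Service equivalent to the sketch below; the selector is inferred from the RC's pod labels, which the log reports as map[app:redis].

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-6758
spec:
  selector:
    app: redis          # copied from the replication controller's pod template labels
  ports:
  - port: 1234          # from --port
    targetPort: 6379    # from --target-port, the redis listen port
```

The second expose (`rm3`) repeats this with name rm3 and port 2345: exposing a Service rather than an RC simply carries the existing selector forward.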
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:16:08.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6758" for this suite. Mar 24 13:16:30.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:16:30.273: INFO: namespace kubectl-6758 deletion completed in 22.114027175s • [SLOW TEST:30.946 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:16:30.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:16:34.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3155" for this suite. Mar 24 13:17:12.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:17:12.535: INFO: namespace kubelet-test-3155 deletion completed in 38.157529866s • [SLOW TEST:42.260 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:17:12.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c383f9be-7131-4396-b93c-644cc7111681 STEP: Creating a pod to test consume configMaps Mar 24 13:17:12.595: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d" in namespace "configmap-1129" to be "success or failure" Mar 24 13:17:12.599: INFO: Pod "pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134583ms Mar 24 13:17:14.620: INFO: Pod "pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024404953s Mar 24 13:17:16.624: INFO: Pod "pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029045236s STEP: Saw pod success Mar 24 13:17:16.624: INFO: Pod "pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d" satisfied condition "success or failure" Mar 24 13:17:16.627: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d container configmap-volume-test: STEP: delete the pod Mar 24 13:17:16.654: INFO: Waiting for pod pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d to disappear Mar 24 13:17:16.666: INFO: Pod pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:17:16.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1129" for this suite. 
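The test above mounts one ConfigMap through two separate volumes in the same pod. A sketch, with the ConfigMap name, pod name, namespace, and container name from the log, and the image, volume names, and mount paths as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-0477b550-7813-49b9-a02a-eb9c62bd3e5d
  namespace: configmap-1129
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-c383f9be-7131-4396-b93c-644cc7111681
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-c383f9be-7131-4396-b93c-644cc7111681
```

Both volumes reference the same ConfigMap object, so the test verifies the same keys are readable at both mount paths.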
Mar 24 13:17:22.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:17:22.752: INFO: namespace configmap-1129 deletion completed in 6.083050104s • [SLOW TEST:10.217 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:17:22.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:17:22.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92" in namespace "downward-api-4262" to be "success or failure" Mar 24 13:17:22.851: INFO: Pod "downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.844354ms Mar 24 13:17:24.855: INFO: Pod "downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010689187s Mar 24 13:17:26.858: INFO: Pod "downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014216496s STEP: Saw pod success Mar 24 13:17:26.859: INFO: Pod "downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92" satisfied condition "success or failure" Mar 24 13:17:26.861: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92 container client-container: STEP: delete the pod Mar 24 13:17:26.895: INFO: Waiting for pod downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92 to disappear Mar 24 13:17:26.899: INFO: Pod downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:17:26.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4262" for this suite. 
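The memory-limit defaulting behavior verified above hinges on a downward API `resourceFieldRef` pointing at `limits.memory` for a container that declares no memory limit, in which case the kubelet substitutes the node's allocatable memory. A sketch (pod name, namespace, and container name from the log; image and paths assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-43d6f91a-d776-4501-9f69-000c94148b92
  namespace: downward-api-4262
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed
    # no resources.limits.memory here: that absence is what the test exercises
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # falls back to node allocatable when unset
```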
Mar 24 13:17:32.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:17:32.986: INFO: namespace downward-api-4262 deletion completed in 6.084596095s • [SLOW TEST:10.233 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:17:32.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2640 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 24 13:17:33.127: INFO: Found 0 stateful pods, waiting for 3 Mar 24 13:17:43.132: INFO: Waiting for 
pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:17:43.132: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:17:43.132: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 24 13:17:53.132: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:17:53.132: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:17:53.132: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 24 13:17:53.160: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 24 13:18:03.197: INFO: Updating stateful set ss2 Mar 24 13:18:03.235: INFO: Waiting for Pod statefulset-2640/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 24 13:18:13.421: INFO: Found 2 stateful pods, waiting for 3 Mar 24 13:18:23.427: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:18:23.427: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:18:23.427: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 24 13:18:23.450: INFO: Updating stateful set ss2 Mar 24 13:18:23.464: INFO: Waiting for Pod statefulset-2640/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 24 13:18:33.491: INFO: Updating stateful set ss2 Mar 24 13:18:33.543: INFO: Waiting for StatefulSet statefulset-2640/ss2 to complete update Mar 24 13:18:33.543: 
INFO: Waiting for Pod statefulset-2640/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 24 13:18:43.551: INFO: Waiting for StatefulSet statefulset-2640/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 24 13:18:53.551: INFO: Deleting all statefulset in ns statefulset-2640 Mar 24 13:18:53.555: INFO: Scaling statefulset ss2 to 0 Mar 24 13:19:13.571: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 13:19:13.574: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:19:13.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2640" for this suite. Mar 24 13:19:19.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:19:19.689: INFO: namespace statefulset-2640 deletion completed in 6.091937589s • [SLOW TEST:106.702 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Mar 24 13:19:19.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8645.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8645.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8645.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 13:19:25.803: INFO: DNS probes using dns-test-59eb0474-b6a1-4d17-a88e-dc2396686d20 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8645.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8645.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8645.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 13:19:31.926: INFO: File wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local from pod dns-8645/dns-test-123b3a2f-db53-4759-a154-8af931d7f712 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 24 13:19:31.929: INFO: Lookups using dns-8645/dns-test-123b3a2f-db53-4759-a154-8af931d7f712 failed for: [wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local] Mar 24 13:19:36.934: INFO: File wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local from pod dns-8645/dns-test-123b3a2f-db53-4759-a154-8af931d7f712 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 24 13:19:36.938: INFO: File jessie_udp@dns-test-service-3.dns-8645.svc.cluster.local from pod dns-8645/dns-test-123b3a2f-db53-4759-a154-8af931d7f712 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 24 13:19:36.938: INFO: Lookups using dns-8645/dns-test-123b3a2f-db53-4759-a154-8af931d7f712 failed for: [wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local jessie_udp@dns-test-service-3.dns-8645.svc.cluster.local] Mar 24 13:19:41.937: INFO: DNS probes using dns-test-123b3a2f-db53-4759-a154-8af931d7f712 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8645.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8645.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8645.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8645.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 13:19:48.917: INFO: DNS probes using dns-test-2823241a-2548-447e-85ef-cffdf9c94404 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:19:49.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8645" for this suite. 
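The wheezy/jessie probe commands recorded above boil down to a small `dig` loop. A minimal standalone sketch, assuming it runs inside the probe pod where cluster DNS is reachable (the service name and `/results` path are copied from the log; the function name and the iteration/interval parameters are my own):

```shell
#!/bin/sh
# Sketch of the CNAME probe loop the e2e test injects into its client pods,
# mirroring: for i in `seq 1 30`; do dig +short <name> CNAME > /results/...; sleep 1; done
# probe_cname NAME OUTFILE [ITERATIONS] [INTERVAL]
probe_cname() {
  name="$1"; out="$2"; n="${3:-30}"; interval="${4:-1}"
  i=1
  while [ "$i" -le "$n" ]; do
    # Each pass overwrites OUTFILE with dig's short-form answer.
    dig +short "$name" CNAME > "$out"
    sleep "$interval"
    i=$((i + 1))
  done
}
```

The test then compares each result file against the expected CNAME target, which is why the log briefly reports `foo.example.com.` instead of `bar.example.com.` after the ExternalName change: the probes keep seeing the old record until the update propagates, and the run succeeds once they do.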
Mar 24 13:19:55.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:19:55.286: INFO: namespace dns-8645 deletion completed in 6.105925227s • [SLOW TEST:35.597 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:19:55.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-5f57cce3-a752-46bb-bf0d-1d2dd5c7c2bb in namespace container-probe-7189 Mar 24 13:19:59.360: INFO: Started pod liveness-5f57cce3-a752-46bb-bf0d-1d2dd5c7c2bb in namespace container-probe-7189 STEP: checking the pod's current state and verifying that restartCount is present Mar 24 13:19:59.363: INFO: Initial restart count of pod liveness-5f57cce3-a752-46bb-bf0d-1d2dd5c7c2bb is 0 Mar 24 13:20:23.419: INFO: Restart count of pod 
container-probe-7189/liveness-5f57cce3-a752-46bb-bf0d-1d2dd5c7c2bb is now 1 (24.055519245s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:20:23.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7189" for this suite. Mar 24 13:20:29.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:20:29.558: INFO: namespace container-probe-7189 deletion completed in 6.105827723s • [SLOW TEST:34.271 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:20:29.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 24 13:20:29.630: INFO: Waiting up to 5m0s for pod "downward-api-36a13f5a-54db-4956-8081-4069e49b96b1" in namespace "downward-api-8879" to be "success or failure" 
Mar 24 13:20:29.682: INFO: Pod "downward-api-36a13f5a-54db-4956-8081-4069e49b96b1": Phase="Pending", Reason="", readiness=false. Elapsed: 51.781981ms Mar 24 13:20:31.686: INFO: Pod "downward-api-36a13f5a-54db-4956-8081-4069e49b96b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056107521s Mar 24 13:20:33.691: INFO: Pod "downward-api-36a13f5a-54db-4956-8081-4069e49b96b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060406953s STEP: Saw pod success Mar 24 13:20:33.691: INFO: Pod "downward-api-36a13f5a-54db-4956-8081-4069e49b96b1" satisfied condition "success or failure" Mar 24 13:20:33.694: INFO: Trying to get logs from node iruya-worker pod downward-api-36a13f5a-54db-4956-8081-4069e49b96b1 container dapi-container: STEP: delete the pod Mar 24 13:20:33.713: INFO: Waiting for pod downward-api-36a13f5a-54db-4956-8081-4069e49b96b1 to disappear Mar 24 13:20:33.760: INFO: Pod downward-api-36a13f5a-54db-4956-8081-4069e49b96b1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:20:33.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8879" for this suite. 
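The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` entries above come from the framework polling the pod's phase. A rough kubectl-based equivalent (the real test uses the client-go API, not kubectl; pod and namespace names here are placeholders):

```shell
#!/bin/sh
# Poll a pod's phase until it reaches Succeeded or Failed, echoing progress
# lines loosely modeled on the framework's 'Phase="..." Elapsed: ...' output.
# wait_for_completion POD NAMESPACE [TIMEOUT_SECONDS] [INTERVAL_SECONDS]
wait_for_completion() {
  pod="$1"; ns="$2"; timeout="${3:-300}"; interval="${4:-2}"
  elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    phase="$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}')"
    echo "Pod \"$pod\": Phase=\"$phase\" (${elapsed}s elapsed)"
    case "$phase" in
      Succeeded|Failed) return 0 ;;   # terminal phases: stop polling
    esac
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1  # timed out
}
```

After the pod succeeds, the framework fetches the container's logs (the `Trying to get logs from node ...` lines) to verify the expected output, then deletes the pod and waits for it to disappear.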
Mar 24 13:20:39.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:20:39.859: INFO: namespace downward-api-8879 deletion completed in 6.094097838s • [SLOW TEST:10.300 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:20:39.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4498.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4498.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 13:20:45.982: INFO: DNS probes using dns-4498/dns-test-4c2abf2c-cab6-46c0-a870-f08bdaddaf22 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:20:45.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4498" for this suite. 
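Each check in the cluster-DNS script above follows the same pattern: run `dig` with `+search`, and write `OK` only if a non-empty answer came back. (The `$$` doubling in the log is Kubernetes' escape for a literal `$` in container commands; a single `$` is what actually runs.) A minimal sketch of one such check, with the function name and argument order being my own:

```shell
#!/bin/sh
# One probe from the cluster-DNS test: query NAME over UDP (+notcp) or
# TCP (+tcp) and record "OK" in OUTFILE only when dig returned an answer.
# dns_check NAME OUTFILE [tcp|notcp]
dns_check() {
  name="$1"; out="$2"; proto="${3:-notcp}"
  check="$(dig "+${proto}" +noall +answer +search "$name" A)" \
    && test -n "$check" \
    && echo OK > "$out"
}
```

The `podARec` line in the log builds the pod's own A-record name by replacing the dots in the pod IP with dashes via `awk` (e.g. an IP like `10.244.1.46` would become `10-244-1-46.dns-4498.pod.cluster.local`), which the same check then resolves as `wheezy_udp@PodARecord` and friends.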
Mar 24 13:20:52.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:20:52.136: INFO: namespace dns-4498 deletion completed in 6.098912667s • [SLOW TEST:12.277 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:20:52.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:20:56.300: INFO: Waiting up to 5m0s for pod "client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8" in namespace "pods-4453" to be "success or failure" Mar 24 13:20:56.306: INFO: Pod "client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.280401ms Mar 24 13:20:58.315: INFO: Pod "client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014740534s Mar 24 13:21:00.319: INFO: Pod "client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018520127s STEP: Saw pod success Mar 24 13:21:00.319: INFO: Pod "client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8" satisfied condition "success or failure" Mar 24 13:21:00.322: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8 container env3cont: STEP: delete the pod Mar 24 13:21:00.356: INFO: Waiting for pod client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8 to disappear Mar 24 13:21:00.366: INFO: Pod client-envvars-b5e25ec8-4e03-463d-afaf-48f37d5417b8 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:21:00.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4453" for this suite. Mar 24 13:21:46.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:21:46.477: INFO: namespace pods-4453 deletion completed in 46.106946733s • [SLOW TEST:54.340 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:21:46.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-8b8a0a03-7c53-44fd-9a13-1d7b423c6c13 STEP: Creating a pod to test consume secrets Mar 24 13:21:46.549: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede" in namespace "projected-3206" to be "success or failure" Mar 24 13:21:46.552: INFO: Pod "pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039126ms Mar 24 13:21:48.556: INFO: Pod "pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007663519s Mar 24 13:21:50.560: INFO: Pod "pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011781617s STEP: Saw pod success Mar 24 13:21:50.560: INFO: Pod "pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede" satisfied condition "success or failure" Mar 24 13:21:50.564: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede container projected-secret-volume-test: STEP: delete the pod Mar 24 13:21:50.586: INFO: Waiting for pod pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede to disappear Mar 24 13:21:50.588: INFO: Pod pod-projected-secrets-598e38fe-79d1-402a-aa70-3216533f6ede no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:21:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3206" for this suite. 
Mar 24 13:21:56.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:21:56.684: INFO: namespace projected-3206 deletion completed in 6.092768598s • [SLOW TEST:10.206 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:21:56.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:21:56.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3276" for this suite. 
Mar 24 13:22:18.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:22:18.945: INFO: namespace pods-3276 deletion completed in 22.129659154s • [SLOW TEST:22.260 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:22:18.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:22:19.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2" in namespace "downward-api-7322" to be "success or failure" Mar 24 13:22:19.059: INFO: Pod "downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.671458ms Mar 24 13:22:21.063: INFO: Pod "downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011842832s Mar 24 13:22:23.073: INFO: Pod "downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021501636s STEP: Saw pod success Mar 24 13:22:23.073: INFO: Pod "downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2" satisfied condition "success or failure" Mar 24 13:22:23.076: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2 container client-container: STEP: delete the pod Mar 24 13:22:23.107: INFO: Waiting for pod downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2 to disappear Mar 24 13:22:23.120: INFO: Pod downwardapi-volume-49c53364-2825-4e0e-9177-13892e9875d2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:22:23.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7322" for this suite. 
Mar 24 13:22:29.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:22:29.243: INFO: namespace downward-api-7322 deletion completed in 6.118780086s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:22:29.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:22:29.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7731" for this suite. 
Mar 24 13:22:35.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:22:35.414: INFO: namespace services-7731 deletion completed in 6.085372287s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.171 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:22:35.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Mar 24 13:22:35.477: INFO: Waiting up to 5m0s for pod "client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6" in namespace "containers-2004" to be "success or failure" Mar 24 13:22:35.492: INFO: Pod "client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.124618ms Mar 24 13:22:37.496: INFO: Pod "client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018492377s Mar 24 13:22:39.500: INFO: Pod "client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02315245s STEP: Saw pod success Mar 24 13:22:39.500: INFO: Pod "client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6" satisfied condition "success or failure" Mar 24 13:22:39.503: INFO: Trying to get logs from node iruya-worker2 pod client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6 container test-container: STEP: delete the pod Mar 24 13:22:39.526: INFO: Waiting for pod client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6 to disappear Mar 24 13:22:39.529: INFO: Pod client-containers-5d38ebc0-271a-47a9-be2f-4ef49f0885c6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:22:39.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2004" for this suite. 
Mar 24 13:22:45.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:22:45.637: INFO: namespace containers-2004 deletion completed in 6.10569887s • [SLOW TEST:10.222 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:22:45.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5230 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 24 13:22:45.681: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 24 13:23:11.827: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.46:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5230 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false}
Mar 24 13:23:11.827: INFO: >>> kubeConfig: /root/.kube/config
I0324 13:23:11.863222 6 log.go:172] (0xc000b0d340) (0xc00149e6e0) Create stream
I0324 13:23:11.863254 6 log.go:172] (0xc000b0d340) (0xc00149e6e0) Stream added, broadcasting: 1
I0324 13:23:11.869854 6 log.go:172] (0xc000b0d340) Reply frame received for 1
I0324 13:23:11.869896 6 log.go:172] (0xc000b0d340) (0xc0012ea320) Create stream
I0324 13:23:11.869910 6 log.go:172] (0xc000b0d340) (0xc0012ea320) Stream added, broadcasting: 3
I0324 13:23:11.871245 6 log.go:172] (0xc000b0d340) Reply frame received for 3
I0324 13:23:11.871288 6 log.go:172] (0xc000b0d340) (0xc001610000) Create stream
I0324 13:23:11.871304 6 log.go:172] (0xc000b0d340) (0xc001610000) Stream added, broadcasting: 5
I0324 13:23:11.872961 6 log.go:172] (0xc000b0d340) Reply frame received for 5
I0324 13:23:11.965908 6 log.go:172] (0xc000b0d340) Data frame received for 5
I0324 13:23:11.965952 6 log.go:172] (0xc001610000) (5) Data frame handling
I0324 13:23:11.965982 6 log.go:172] (0xc000b0d340) Data frame received for 3
I0324 13:23:11.965995 6 log.go:172] (0xc0012ea320) (3) Data frame handling
I0324 13:23:11.966010 6 log.go:172] (0xc0012ea320) (3) Data frame sent
I0324 13:23:11.966021 6 log.go:172] (0xc000b0d340) Data frame received for 3
I0324 13:23:11.966032 6 log.go:172] (0xc0012ea320) (3) Data frame handling
I0324 13:23:11.967786 6 log.go:172] (0xc000b0d340) Data frame received for 1
I0324 13:23:11.967820 6 log.go:172] (0xc00149e6e0) (1) Data frame handling
I0324 13:23:11.967854 6 log.go:172] (0xc00149e6e0) (1) Data frame sent
I0324 13:23:11.967880 6 log.go:172] (0xc000b0d340) (0xc00149e6e0) Stream removed, broadcasting: 1
I0324 13:23:11.967993 6 log.go:172] (0xc000b0d340) Go away received
I0324 13:23:11.968089 6 log.go:172] (0xc000b0d340) (0xc00149e6e0) Stream removed, broadcasting: 1
I0324 13:23:11.968126 6 log.go:172] (0xc000b0d340) (0xc0012ea320) Stream removed, broadcasting: 3
I0324 13:23:11.968149 6 log.go:172] (0xc000b0d340) (0xc001610000) Stream removed, broadcasting: 5
Mar 24 13:23:11.968: INFO: Found all expected endpoints: [netserver-0]
Mar 24 13:23:11.971: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.174:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5230 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 24 13:23:11.971: INFO: >>> kubeConfig: /root/.kube/config
I0324 13:23:12.004470 6 log.go:172] (0xc000b0dad0) (0xc00149eaa0) Create stream
I0324 13:23:12.004499 6 log.go:172] (0xc000b0dad0) (0xc00149eaa0) Stream added, broadcasting: 1
I0324 13:23:12.006849 6 log.go:172] (0xc000b0dad0) Reply frame received for 1
I0324 13:23:12.006896 6 log.go:172] (0xc000b0dad0) (0xc001610280) Create stream
I0324 13:23:12.006911 6 log.go:172] (0xc000b0dad0) (0xc001610280) Stream added, broadcasting: 3
I0324 13:23:12.007946 6 log.go:172] (0xc000b0dad0) Reply frame received for 3
I0324 13:23:12.007996 6 log.go:172] (0xc000b0dad0) (0xc000b72280) Create stream
I0324 13:23:12.008010 6 log.go:172] (0xc000b0dad0) (0xc000b72280) Stream added, broadcasting: 5
I0324 13:23:12.008897 6 log.go:172] (0xc000b0dad0) Reply frame received for 5
I0324 13:23:12.059608 6 log.go:172] (0xc000b0dad0) Data frame received for 3
I0324 13:23:12.059642 6 log.go:172] (0xc001610280) (3) Data frame handling
I0324 13:23:12.059798 6 log.go:172] (0xc001610280) (3) Data frame sent
I0324 13:23:12.060002 6 log.go:172] (0xc000b0dad0) Data frame received for 5
I0324 13:23:12.060030 6 log.go:172] (0xc000b72280) (5) Data frame handling
I0324 13:23:12.060128 6 log.go:172] (0xc000b0dad0) Data frame received for 3
I0324 13:23:12.060147 6 log.go:172] (0xc001610280) (3) Data frame handling
I0324 13:23:12.061781 6 log.go:172] (0xc000b0dad0) Data frame received for 1
I0324 13:23:12.061814 6 log.go:172] (0xc00149eaa0) (1) Data frame handling
I0324 13:23:12.061837 6 log.go:172] (0xc00149eaa0) (1) Data frame sent
I0324 13:23:12.061865 6 log.go:172] (0xc000b0dad0) (0xc00149eaa0) Stream removed, broadcasting: 1
I0324 13:23:12.061894 6 log.go:172] (0xc000b0dad0) Go away received
I0324 13:23:12.062055 6 log.go:172] (0xc000b0dad0) (0xc00149eaa0) Stream removed, broadcasting: 1
I0324 13:23:12.062081 6 log.go:172] (0xc000b0dad0) (0xc001610280) Stream removed, broadcasting: 3
I0324 13:23:12.062095 6 log.go:172] (0xc000b0dad0) (0xc000b72280) Stream removed, broadcasting: 5
Mar 24 13:23:12.062: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:23:12.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5230" for this suite.
Mar 24 13:23:34.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:23:34.160: INFO: namespace pod-network-test-5230 deletion completed in 22.093316693s

• [SLOW TEST:48.522 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:23:34.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9813.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9813.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9813.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9813.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9813.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9813.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 24 13:23:40.286: INFO: DNS probes using dns-9813/dns-test-9a2e84d3-8a7a-45ee-97a9-657d31fda442 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:23:40.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9813" for this suite.
Mar 24 13:23:46.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:23:46.433: INFO: namespace dns-9813 deletion completed in 6.113450876s

• [SLOW TEST:12.273 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:23:46.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0f7958e3-3caf-46b7-9202-5e3023355420
STEP: Creating a pod to test consume configMaps
Mar 24 13:23:46.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b" in namespace "projected-7438" to be "success or failure"
Mar 24 13:23:46.504: INFO: Pod "pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647273ms
Mar 24 13:23:48.508: INFO: Pod "pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007345983s
Mar 24 13:23:50.512: INFO: Pod "pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011034973s
STEP: Saw pod success
Mar 24 13:23:50.512: INFO: Pod "pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b" satisfied condition "success or failure"
Mar 24 13:23:50.515: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b container projected-configmap-volume-test:
STEP: delete the pod
Mar 24 13:23:50.530: INFO: Waiting for pod pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b to disappear
Mar 24 13:23:50.534: INFO: Pod pod-projected-configmaps-48308f83-7928-46f2-84be-18071092977b no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:23:50.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7438" for this suite.
Mar 24 13:23:56.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:23:56.650: INFO: namespace projected-7438 deletion completed in 6.112028691s

• [SLOW TEST:10.216 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:23:56.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 24 13:23:59.761: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:23:59.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7956" for this suite.
Mar 24 13:24:05.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:24:05.878: INFO: namespace container-runtime-7956 deletion completed in 6.07951209s

• [SLOW TEST:9.227 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:24:05.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-l4c4
STEP: Creating a pod to test atomic-volume-subpath
Mar 24 13:24:05.975: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l4c4" in namespace "subpath-4793" to be "success or failure"
Mar 24 13:24:05.993: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.30569ms
Mar 24 13:24:07.996: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021476939s
Mar 24 13:24:10.000: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 4.025885016s
Mar 24 13:24:12.005: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 6.030502408s
Mar 24 13:24:14.009: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 8.034899609s
Mar 24 13:24:16.014: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 10.039375115s
Mar 24 13:24:18.019: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 12.044072245s
Mar 24 13:24:20.023: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 14.048543666s
Mar 24 13:24:22.028: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 16.053148602s
Mar 24 13:24:24.032: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 18.057453657s
Mar 24 13:24:26.036: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 20.061357761s
Mar 24 13:24:28.040: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Running", Reason="", readiness=true. Elapsed: 22.065763107s
Mar 24 13:24:30.045: INFO: Pod "pod-subpath-test-configmap-l4c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070437539s
STEP: Saw pod success
Mar 24 13:24:30.045: INFO: Pod "pod-subpath-test-configmap-l4c4" satisfied condition "success or failure"
Mar 24 13:24:30.048: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-l4c4 container test-container-subpath-configmap-l4c4:
STEP: delete the pod
Mar 24 13:24:30.084: INFO: Waiting for pod pod-subpath-test-configmap-l4c4 to disappear
Mar 24 13:24:30.098: INFO: Pod pod-subpath-test-configmap-l4c4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-l4c4
Mar 24 13:24:30.098: INFO: Deleting pod "pod-subpath-test-configmap-l4c4" in namespace "subpath-4793"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:24:30.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4793" for this suite.
Mar 24 13:24:36.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:24:36.193: INFO: namespace subpath-4793 deletion completed in 6.089988455s

• [SLOW TEST:30.315 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:24:36.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 24 13:24:36.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7807'
Mar 24 13:24:39.438: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 24 13:24:39.438: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Mar 24 13:24:39.465: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-8m8pb]
Mar 24 13:24:39.465: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-8m8pb" in namespace "kubectl-7807" to be "running and ready"
Mar 24 13:24:39.481: INFO: Pod "e2e-test-nginx-rc-8m8pb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.33912ms
Mar 24 13:24:41.485: INFO: Pod "e2e-test-nginx-rc-8m8pb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020356789s
Mar 24 13:24:43.489: INFO: Pod "e2e-test-nginx-rc-8m8pb": Phase="Running", Reason="", readiness=true. Elapsed: 4.024440148s
Mar 24 13:24:43.489: INFO: Pod "e2e-test-nginx-rc-8m8pb" satisfied condition "running and ready"
Mar 24 13:24:43.490: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-8m8pb]
Mar 24 13:24:43.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7807'
Mar 24 13:24:43.610: INFO: stderr: ""
Mar 24 13:24:43.610: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Mar 24 13:24:43.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7807'
Mar 24 13:24:43.725: INFO: stderr: ""
Mar 24 13:24:43.725: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:24:43.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7807" for this suite.
Mar 24 13:24:49.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:24:49.838: INFO: namespace kubectl-7807 deletion completed in 6.109717403s

• [SLOW TEST:13.644 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:24:49.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 24 13:24:49.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:49.963: INFO: Number of nodes with available pods: 0
Mar 24 13:24:49.963: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:50.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:50.971: INFO: Number of nodes with available pods: 0
Mar 24 13:24:50.971: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:51.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:51.970: INFO: Number of nodes with available pods: 0
Mar 24 13:24:51.970: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:52.976: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:52.979: INFO: Number of nodes with available pods: 0
Mar 24 13:24:52.979: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:53.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:53.971: INFO: Number of nodes with available pods: 2
Mar 24 13:24:53.971: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar 24 13:24:53.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:53.995: INFO: Number of nodes with available pods: 1
Mar 24 13:24:53.995: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:55.000: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:55.004: INFO: Number of nodes with available pods: 1
Mar 24 13:24:55.004: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:56.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:56.005: INFO: Number of nodes with available pods: 1
Mar 24 13:24:56.005: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:57.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:57.005: INFO: Number of nodes with available pods: 1
Mar 24 13:24:57.005: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:58.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:58.006: INFO: Number of nodes with available pods: 1
Mar 24 13:24:58.006: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:24:59.000: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:24:59.004: INFO: Number of nodes with available pods: 1
Mar 24 13:24:59.004: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:25:00.005: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:25:00.007: INFO: Number of nodes with available pods: 1
Mar 24 13:25:00.007: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:25:00.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:25:01.003: INFO: Number of nodes with available pods: 1
Mar 24 13:25:01.003: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:25:01.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:25:02.001: INFO: Number of nodes with available pods: 1
Mar 24 13:25:02.001: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:25:02.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:25:03.003: INFO: Number of nodes with available pods: 1
Mar 24 13:25:03.003: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:25:04.039: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:25:04.042: INFO: Number of nodes with available pods: 1
Mar 24 13:25:04.042: INFO: Node iruya-worker is running more than one daemon pod
Mar 24 13:25:04.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 24 13:25:05.002: INFO: Number of nodes with available pods: 2
Mar 24 13:25:05.002: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1336, will wait for the garbage collector to delete the pods
Mar 24 13:25:05.062: INFO: Deleting DaemonSet.extensions daemon-set took: 4.793467ms
Mar 24 13:25:05.362: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.213958ms
Mar 24 13:25:12.164: INFO: Number of nodes with available pods: 0
Mar 24 13:25:12.164: INFO: Number of running nodes: 0, number of available pods: 0
Mar 24 13:25:12.166: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1336/daemonsets","resourceVersion":"1597508"},"items":null}
Mar 24 13:25:12.167: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1336/pods","resourceVersion":"1597508"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:25:12.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1336" for this suite.
Mar 24 13:25:18.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:25:18.323: INFO: namespace daemonsets-1336 deletion completed in 6.14838517s

• [SLOW TEST:28.485 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:25:18.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4ab32714-aa46-475f-8548-3f907c499375
STEP: Creating a pod to test consume configMaps
Mar 24 13:25:18.402: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b" in namespace "projected-554" to be "success or failure"
Mar 24 13:25:18.420: INFO: Pod "pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.596745ms
Mar 24 13:25:20.426: INFO: Pod "pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02445046s
Mar 24 13:25:22.430: INFO: Pod "pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028340217s
Mar 24 13:25:24.435: INFO: Pod "pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033512555s
STEP: Saw pod success
Mar 24 13:25:24.435: INFO: Pod "pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b" satisfied condition "success or failure"
Mar 24 13:25:24.438: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b container projected-configmap-volume-test:
STEP: delete the pod
Mar 24 13:25:24.456: INFO: Waiting for pod pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b to disappear
Mar 24 13:25:24.477: INFO: Pod pod-projected-configmaps-f2f37027-327e-43c9-89d2-13fd24b57b7b no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:25:24.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-554" for this suite.
Mar 24 13:25:30.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:25:30.613: INFO: namespace projected-554 deletion completed in 6.132347163s

• [SLOW TEST:12.289 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:25:30.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 24 13:25:30.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:25:34.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8555" for this suite.
Mar 24 13:26:14.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:26:14.836: INFO: namespace pods-8555 deletion completed in 40.127625281s • [SLOW TEST:44.223 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:26:14.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 24 13:26:14.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7714' Mar 24 13:26:15.024: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed 
in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 24 13:26:15.024: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 24 13:26:15.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7714' Mar 24 13:26:15.176: INFO: stderr: "" Mar 24 13:26:15.176: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:26:15.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7714" for this suite. Mar 24 13:26:21.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:26:21.316: INFO: namespace kubectl-7714 deletion completed in 6.136763167s • [SLOW TEST:6.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:26:21.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 24 13:26:21.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a" in namespace "projected-7225" to be "success or failure" Mar 24 13:26:21.427: INFO: Pod "downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540389ms Mar 24 13:26:23.436: INFO: Pod "downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015883594s Mar 24 13:26:25.440: INFO: Pod "downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020193822s STEP: Saw pod success Mar 24 13:26:25.441: INFO: Pod "downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a" satisfied condition "success or failure" Mar 24 13:26:25.444: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a container client-container: STEP: delete the pod Mar 24 13:26:25.474: INFO: Waiting for pod downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a to disappear Mar 24 13:26:25.486: INFO: Pod downwardapi-volume-4a788331-b72f-44b6-b9a2-806c03bfc95a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:26:25.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7225" for this suite. Mar 24 13:26:31.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:26:31.603: INFO: namespace projected-7225 deletion completed in 6.112641955s • [SLOW TEST:10.285 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:26:31.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for 
a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-fcb7fa51-4849-4928-871d-8c6b81a0f41c STEP: Creating configMap with name cm-test-opt-upd-871e3bf2-6029-43d5-b101-fc73a7696bcf STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fcb7fa51-4849-4928-871d-8c6b81a0f41c STEP: Updating configmap cm-test-opt-upd-871e3bf2-6029-43d5-b101-fc73a7696bcf STEP: Creating configMap with name cm-test-opt-create-2447e976-9e5a-430e-a01f-b60d8ff64fd3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:27:48.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1455" for this suite. 
Mar 24 13:28:10.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:28:10.215: INFO: namespace projected-1455 deletion completed in 22.096682402s • [SLOW TEST:98.612 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:28:10.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a57de6eb-0c6e-4b56-810e-6554007f3869 STEP: Creating a pod to test consume configMaps Mar 24 13:28:10.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b" in namespace "configmap-2543" to be "success or failure" Mar 24 13:28:10.305: INFO: Pod "pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.072947ms Mar 24 13:28:12.309: INFO: Pod "pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013696147s Mar 24 13:28:14.314: INFO: Pod "pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018220017s STEP: Saw pod success Mar 24 13:28:14.314: INFO: Pod "pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b" satisfied condition "success or failure" Mar 24 13:28:14.317: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b container configmap-volume-test: STEP: delete the pod Mar 24 13:28:14.354: INFO: Waiting for pod pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b to disappear Mar 24 13:28:14.369: INFO: Pod pod-configmaps-fd6a762a-e685-4f53-89a6-aa5e6accb93b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:28:14.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2543" for this suite. Mar 24 13:28:20.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:28:20.467: INFO: namespace configmap-2543 deletion completed in 6.094696249s • [SLOW TEST:10.252 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Mar 24 13:28:20.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2612/configmap-test-09b63176-6978-413f-8455-a05496362c08 STEP: Creating a pod to test consume configMaps Mar 24 13:28:20.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261" in namespace "configmap-2612" to be "success or failure" Mar 24 13:28:20.575: INFO: Pod "pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261": Phase="Pending", Reason="", readiness=false. Elapsed: 24.342779ms Mar 24 13:28:22.579: INFO: Pod "pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028493622s Mar 24 13:28:24.584: INFO: Pod "pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033016572s STEP: Saw pod success Mar 24 13:28:24.584: INFO: Pod "pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261" satisfied condition "success or failure" Mar 24 13:28:24.587: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261 container env-test: STEP: delete the pod Mar 24 13:28:24.605: INFO: Waiting for pod pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261 to disappear Mar 24 13:28:24.608: INFO: Pod pod-configmaps-342048a8-9b7b-4db8-8e33-f2ad17c3d261 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:28:24.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2612" for this suite. 
Mar 24 13:28:30.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:28:30.745: INFO: namespace configmap-2612 deletion completed in 6.133884476s • [SLOW TEST:10.278 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:28:30.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:28:30.789: INFO: Creating ReplicaSet my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0 Mar 24 13:28:30.824: INFO: Pod name my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0: Found 0 pods out of 1 Mar 24 13:28:35.828: INFO: Pod name my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0: Found 1 pods out of 1 Mar 24 13:28:35.828: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0" is running Mar 24 13:28:35.831: INFO: Pod "my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0-6986q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-03-24 13:28:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:28:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:28:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:28:30 +0000 UTC Reason: Message:}]) Mar 24 13:28:35.831: INFO: Trying to dial the pod Mar 24 13:28:40.843: INFO: Controller my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0: Got expected result from replica 1 [my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0-6986q]: "my-hostname-basic-b343c0dd-a3f6-4a7f-8235-0514ce7446e0-6986q", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:28:40.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6524" for this suite. 
Mar 24 13:28:46.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:28:46.944: INFO: namespace replicaset-6524 deletion completed in 6.097066048s • [SLOW TEST:16.198 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:28:46.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 24 13:28:46.977: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 24 13:28:46.986: INFO: Waiting for terminating namespaces to be deleted... 
Mar 24 13:28:46.988: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 24 13:28:46.995: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 24 13:28:46.995: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 13:28:46.995: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 24 13:28:46.995: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 13:28:46.995: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 24 13:28:47.034: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 24 13:28:47.034: INFO: Container coredns ready: true, restart count 0 Mar 24 13:28:47.034: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 24 13:28:47.034: INFO: Container coredns ready: true, restart count 0 Mar 24 13:28:47.034: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 24 13:28:47.034: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 13:28:47.034: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 24 13:28:47.034: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 24 13:28:47.093: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Mar 24 13:28:47.093: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Mar 24 13:28:47.093: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Mar 24 13:28:47.093: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Mar 24 13:28:47.093: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Mar 24 13:28:47.093: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa.15ff40836fc93186], Reason = [Scheduled], Message = [Successfully assigned sched-pred-511/filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa.15ff4083e0eff20b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa.15ff4084116fd235], Reason = [Created], Message = [Created container filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa] STEP: Considering event: Type = [Normal], Name = [filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa.15ff40842708093a], Reason = [Started], Message = [Started container filler-pod-5b7965e2-1c44-4736-a52e-33b0eaa87baa] STEP: Considering event: Type = [Normal], Name = [filler-pod-fef00032-d904-469d-90b1-14807b101383.15ff40836fc8a101], Reason = [Scheduled], Message = [Successfully assigned sched-pred-511/filler-pod-fef00032-d904-469d-90b1-14807b101383 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-fef00032-d904-469d-90b1-14807b101383.15ff4083e589e453], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-fef00032-d904-469d-90b1-14807b101383.15ff40841d596f29], Reason = [Created], Message = [Created 
container filler-pod-fef00032-d904-469d-90b1-14807b101383] STEP: Considering event: Type = [Normal], Name = [filler-pod-fef00032-d904-469d-90b1-14807b101383.15ff40842eec436a], Reason = [Started], Message = [Started container filler-pod-fef00032-d904-469d-90b1-14807b101383] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ff40845f4aefff], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:28:52.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-511" for this suite. 
Mar 24 13:28:58.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:28:58.331: INFO: namespace sched-pred-511 deletion completed in 6.104987565s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:28:58.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:28:58.405: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 24 13:28:58.412: INFO: Number of nodes with available pods: 0 Mar 24 13:28:58.412: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 24 13:28:58.491: INFO: Number of nodes with available pods: 0 Mar 24 13:28:58.491: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:28:59.495: INFO: Number of nodes with available pods: 0 Mar 24 13:28:59.496: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:00.495: INFO: Number of nodes with available pods: 0 Mar 24 13:29:00.495: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:01.496: INFO: Number of nodes with available pods: 0 Mar 24 13:29:01.496: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:02.500: INFO: Number of nodes with available pods: 1 Mar 24 13:29:02.501: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 24 13:29:02.531: INFO: Number of nodes with available pods: 1 Mar 24 13:29:02.531: INFO: Number of running nodes: 0, number of available pods: 1 Mar 24 13:29:03.539: INFO: Number of nodes with available pods: 0 Mar 24 13:29:03.539: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 24 13:29:03.551: INFO: Number of nodes with available pods: 0 Mar 24 13:29:03.551: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:04.577: INFO: Number of nodes with available pods: 0 Mar 24 13:29:04.577: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:05.555: INFO: Number of nodes with available pods: 0 Mar 24 13:29:05.555: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:06.555: INFO: Number of nodes with available pods: 0 Mar 24 13:29:06.555: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:07.556: INFO: Number of nodes with available pods: 0 Mar 24 13:29:07.556: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:08.556: INFO: Number of nodes with available 
pods: 0 Mar 24 13:29:08.556: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:09.556: INFO: Number of nodes with available pods: 0 Mar 24 13:29:09.556: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:10.556: INFO: Number of nodes with available pods: 0 Mar 24 13:29:10.556: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:11.555: INFO: Number of nodes with available pods: 0 Mar 24 13:29:11.555: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:12.555: INFO: Number of nodes with available pods: 0 Mar 24 13:29:12.555: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:13.558: INFO: Number of nodes with available pods: 0 Mar 24 13:29:13.558: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:14.570: INFO: Number of nodes with available pods: 0 Mar 24 13:29:14.570: INFO: Node iruya-worker is running more than one daemon pod Mar 24 13:29:15.570: INFO: Number of nodes with available pods: 1 Mar 24 13:29:15.570: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-24, will wait for the garbage collector to delete the pods Mar 24 13:29:15.635: INFO: Deleting DaemonSet.extensions daemon-set took: 6.468634ms Mar 24 13:29:15.936: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.23893ms Mar 24 13:29:18.639: INFO: Number of nodes with available pods: 0 Mar 24 13:29:18.639: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 13:29:18.642: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-24/daemonsets","resourceVersion":"1598336"},"items":null} Mar 24 13:29:18.644: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-24/pods","resourceVersion":"1598336"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:29:18.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-24" for this suite. Mar 24 13:29:24.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:29:24.832: INFO: namespace daemonsets-24 deletion completed in 6.134817386s • [SLOW TEST:26.501 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:29:24.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 24 
13:29:24.906: INFO: PodSpec: initContainers in spec.initContainers Mar 24 13:30:12.546: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-360ebd6e-35b2-44ef-9925-b4b66999ff0e", GenerateName:"", Namespace:"init-container-6266", SelfLink:"/api/v1/namespaces/init-container-6266/pods/pod-init-360ebd6e-35b2-44ef-9925-b4b66999ff0e", UID:"b9bb2052-7970-4c53-bdbe-b5b534b1c8f3", ResourceVersion:"1598492", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720653364, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"906961783"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jdx4f", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00228b940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jdx4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jdx4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jdx4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028284e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0025f7ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002828570)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002828590)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002828598), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00282859c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653365, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653365, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653365, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653364, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.55", StartTime:(*v1.Time)(0xc00316a980), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002070700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002070770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8dd79bf243b25a32febf64809f12b867cb298fc98871702bdaae8db848a06f3f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00316a9c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00316a9a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:30:12.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6266" for this suite.
Mar 24 13:30:34.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:30:34.679: INFO: namespace init-container-6266 deletion completed in 22.104713856s
• [SLOW TEST:69.846 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:30:34.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 24 13:30:34.734: INFO: Waiting up to 5m0s for pod "pod-16fec35d-91a4-42b8-999e-c04129bf8fa5" in namespace "emptydir-4183" to be "success or failure"
Mar 24 13:30:34.744: INFO: Pod "pod-16fec35d-91a4-42b8-999e-c04129bf8fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.348792ms
Mar 24 13:30:36.747: INFO: Pod "pod-16fec35d-91a4-42b8-999e-c04129bf8fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012630292s
Mar 24 13:30:38.751: INFO: Pod "pod-16fec35d-91a4-42b8-999e-c04129bf8fa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01667012s
STEP: Saw pod success
Mar 24 13:30:38.751: INFO: Pod "pod-16fec35d-91a4-42b8-999e-c04129bf8fa5" satisfied condition "success or failure"
Mar 24 13:30:38.755: INFO: Trying to get logs from node iruya-worker pod pod-16fec35d-91a4-42b8-999e-c04129bf8fa5 container test-container:
STEP: delete the pod
Mar 24 13:30:38.794: INFO: Waiting for pod pod-16fec35d-91a4-42b8-999e-c04129bf8fa5 to disappear
Mar 24 13:30:38.816: INFO: Pod pod-16fec35d-91a4-42b8-999e-c04129bf8fa5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:30:38.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4183" for this suite.
Mar 24 13:30:44.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:30:44.905: INFO: namespace emptydir-4183 deletion completed in 6.084573571s
• [SLOW TEST:10.226 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:30:44.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 13:30:44.950: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad" in namespace "downward-api-2983" to be "success or failure"
Mar 24 13:30:44.962: INFO: Pod "downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad": Phase="Pending", Reason="", readiness=false. Elapsed: 12.056127ms
Mar 24 13:30:46.966: INFO: Pod "downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016362576s
Mar 24 13:30:48.970: INFO: Pod "downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019957966s
STEP: Saw pod success
Mar 24 13:30:48.970: INFO: Pod "downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad" satisfied condition "success or failure"
Mar 24 13:30:48.974: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad container client-container:
STEP: delete the pod
Mar 24 13:30:49.011: INFO: Waiting for pod downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad to disappear
Mar 24 13:30:49.038: INFO: Pod downwardapi-volume-a595447c-6d48-4d42-9a2f-b2152112caad no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:30:49.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2983" for this suite.
Mar 24 13:30:55.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:30:55.140: INFO: namespace downward-api-2983 deletion completed in 6.098791229s
• [SLOW TEST:10.235 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:30:55.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ce4a18b1-49ba-412a-b187-2ede88ef2088
STEP: Creating a pod to test consume secrets
Mar 24 13:30:55.214: INFO: Waiting up to 5m0s for pod "pod-secrets-1083045b-897d-4403-bb73-d399687bd934" in namespace "secrets-7417" to be "success or failure"
Mar 24 13:30:55.236: INFO: Pod "pod-secrets-1083045b-897d-4403-bb73-d399687bd934": Phase="Pending", Reason="", readiness=false. Elapsed: 21.470033ms
Mar 24 13:30:57.239: INFO: Pod "pod-secrets-1083045b-897d-4403-bb73-d399687bd934": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024992877s
Mar 24 13:30:59.244: INFO: Pod "pod-secrets-1083045b-897d-4403-bb73-d399687bd934": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02936265s
STEP: Saw pod success
Mar 24 13:30:59.244: INFO: Pod "pod-secrets-1083045b-897d-4403-bb73-d399687bd934" satisfied condition "success or failure"
Mar 24 13:30:59.248: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1083045b-897d-4403-bb73-d399687bd934 container secret-volume-test:
STEP: delete the pod
Mar 24 13:30:59.277: INFO: Waiting for pod pod-secrets-1083045b-897d-4403-bb73-d399687bd934 to disappear
Mar 24 13:30:59.283: INFO: Pod pod-secrets-1083045b-897d-4403-bb73-d399687bd934 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:30:59.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7417" for this suite.
Mar 24 13:31:05.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:31:05.384: INFO: namespace secrets-7417 deletion completed in 6.097845097s
• [SLOW TEST:10.243 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:31:05.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-34b374fa-791b-49e0-a1e1-f06e9c34ec47
STEP: Creating a pod to test consume secrets
Mar 24 13:31:05.471: INFO: Waiting up to 5m0s for pod "pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e" in namespace "secrets-3704" to be "success or failure"
Mar 24 13:31:05.475: INFO: Pod "pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.639547ms
Mar 24 13:31:07.479: INFO: Pod "pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007725495s
Mar 24 13:31:09.483: INFO: Pod "pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011643461s
STEP: Saw pod success
Mar 24 13:31:09.483: INFO: Pod "pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e" satisfied condition "success or failure"
Mar 24 13:31:09.485: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e container secret-volume-test:
STEP: delete the pod
Mar 24 13:31:09.500: INFO: Waiting for pod pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e to disappear
Mar 24 13:31:09.505: INFO: Pod pod-secrets-f9b22663-030d-42cb-820d-1690a77c5f0e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:31:09.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3704" for this suite.
Mar 24 13:31:15.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:31:15.602: INFO: namespace secrets-3704 deletion completed in 6.094440363s • [SLOW TEST:10.218 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:31:15.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:31:21.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8493" for this suite.
Mar 24 13:31:27.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:31:27.948: INFO: namespace namespaces-8493 deletion completed in 6.091065429s
STEP: Destroying namespace "nsdeletetest-3547" for this suite.
Mar 24 13:31:27.950: INFO: Namespace nsdeletetest-3547 was already deleted
STEP: Destroying namespace "nsdeletetest-5891" for this suite.
Mar 24 13:31:33.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:31:34.040: INFO: namespace nsdeletetest-5891 deletion completed in 6.089721004s
• [SLOW TEST:18.437 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:31:34.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 24 13:31:34.100: INFO: Waiting up to 5m0s for pod "pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003" in namespace "emptydir-3938" to be "success or failure"
Mar 24 13:31:34.104: INFO: Pod "pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003": Phase="Pending", Reason="", readiness=false. Elapsed: 3.585343ms
Mar 24 13:31:36.108: INFO: Pod "pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007856805s
Mar 24 13:31:38.112: INFO: Pod "pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012081779s
STEP: Saw pod success
Mar 24 13:31:38.113: INFO: Pod "pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003" satisfied condition "success or failure"
Mar 24 13:31:38.115: INFO: Trying to get logs from node iruya-worker2 pod pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003 container test-container:
STEP: delete the pod
Mar 24 13:31:38.136: INFO: Waiting for pod pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003 to disappear
Mar 24 13:31:38.140: INFO: Pod pod-ee87f6b6-d535-4afc-9aab-bd82cec0d003 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:31:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3938" for this suite.
Mar 24 13:31:44.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:31:44.259: INFO: namespace emptydir-3938 deletion completed in 6.116034727s • [SLOW TEST:10.219 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:31:44.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:31:48.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-966" for this suite. 
Mar 24 13:31:54.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:31:54.508: INFO: namespace emptydir-wrapper-966 deletion completed in 6.126361716s • [SLOW TEST:10.249 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:31:54.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 24 13:31:59.118: INFO: Successfully updated pod "pod-update-a6b82179-b6a3-4f2c-a96a-1cc516a693d0" STEP: verifying the updated pod is in kubernetes Mar 24 13:31:59.125: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:31:59.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8110" for this suite. 
Mar 24 13:32:21.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:32:21.240: INFO: namespace pods-8110 deletion completed in 22.112274975s
• [SLOW TEST:26.731 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:32:21.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-h265
STEP: Creating a pod to test atomic-volume-subpath
Mar 24 13:32:21.388: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-h265" in namespace "subpath-9757" to be "success or failure"
Mar 24 13:32:21.393: INFO: Pod "pod-subpath-test-projected-h265": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036537ms
Mar 24 13:32:23.398: INFO: Pod "pod-subpath-test-projected-h265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009018206s
Mar 24 13:32:25.402: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 4.013205182s
Mar 24 13:32:27.406: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 6.017235606s
Mar 24 13:32:29.410: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 8.021369752s
Mar 24 13:32:31.415: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 10.026021938s
Mar 24 13:32:33.418: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 12.029783676s
Mar 24 13:32:35.423: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 14.034184594s
Mar 24 13:32:37.427: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 16.038508907s
Mar 24 13:32:39.431: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 18.042508321s
Mar 24 13:32:41.436: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 20.047057716s
Mar 24 13:32:43.440: INFO: Pod "pod-subpath-test-projected-h265": Phase="Running", Reason="", readiness=true. Elapsed: 22.051599941s
Mar 24 13:32:45.444: INFO: Pod "pod-subpath-test-projected-h265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055083588s
STEP: Saw pod success
Mar 24 13:32:45.444: INFO: Pod "pod-subpath-test-projected-h265" satisfied condition "success or failure"
Mar 24 13:32:45.446: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-h265 container test-container-subpath-projected-h265:
STEP: delete the pod
Mar 24 13:32:45.466: INFO: Waiting for pod pod-subpath-test-projected-h265 to disappear
Mar 24 13:32:45.470: INFO: Pod pod-subpath-test-projected-h265 no longer exists
STEP: Deleting pod pod-subpath-test-projected-h265
Mar 24 13:32:45.471: INFO: Deleting pod "pod-subpath-test-projected-h265" in namespace "subpath-9757"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:32:45.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9757" for this suite.
Mar 24 13:32:51.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:32:51.575: INFO: namespace subpath-9757 deletion completed in 6.099248373s
• [SLOW TEST:30.334 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:32:51.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar 24 13:32:55.662: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-cc30a2b9-f247-42a3-becc-b1e43c448b65,GenerateName:,Namespace:events-3603,SelfLink:/api/v1/namespaces/events-3603/pods/send-events-cc30a2b9-f247-42a3-becc-b1e43c448b65,UID:5e5192d5-47f7-40f9-b66f-8990a3df91a1,ResourceVersion:1599066,Generation:0,CreationTimestamp:2020-03-24 13:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 625324631,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d5t95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d5t95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-d5t95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032868e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003286900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:32:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:32:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:32:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:32:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.192,StartTime:2020-03-24 13:32:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-24 13:32:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://9209569b2a0e684f54237faebf9a46e7a6f1e86481df35a4f29d2bbcb242131d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 24 13:32:57.666: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 24 13:32:59.671: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:32:59.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3603" for this suite. Mar 24 13:33:45.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:33:45.821: INFO: namespace events-3603 deletion completed in 46.111894839s • [SLOW TEST:54.245 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:33:45.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 24 13:33:45.876: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7979' Mar 24 13:33:46.232: INFO: stderr: "" Mar 24 13:33:46.232: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 13:33:46.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:33:46.698: INFO: stderr: "" Mar 24 13:33:46.698: INFO: stdout: "update-demo-nautilus-98tzr update-demo-nautilus-9cln9 " Mar 24 13:33:46.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:33:46.779: INFO: stderr: "" Mar 24 13:33:46.779: INFO: stdout: "" Mar 24 13:33:46.779: INFO: update-demo-nautilus-98tzr is created but not running Mar 24 13:33:51.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:33:51.881: INFO: stderr: "" Mar 24 13:33:51.881: INFO: stdout: "update-demo-nautilus-98tzr update-demo-nautilus-9cln9 " Mar 24 13:33:51.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:33:51.977: INFO: stderr: "" Mar 24 13:33:51.977: INFO: stdout: "true" Mar 24 13:33:51.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98tzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:33:52.060: INFO: stderr: "" Mar 24 13:33:52.060: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 13:33:52.060: INFO: validating pod update-demo-nautilus-98tzr Mar 24 13:33:52.064: INFO: got data: { "image": "nautilus.jpg" } Mar 24 13:33:52.064: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 13:33:52.064: INFO: update-demo-nautilus-98tzr is verified up and running Mar 24 13:33:52.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:33:52.151: INFO: stderr: "" Mar 24 13:33:52.151: INFO: stdout: "true" Mar 24 13:33:52.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:33:52.239: INFO: stderr: "" Mar 24 13:33:52.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 13:33:52.239: INFO: validating pod update-demo-nautilus-9cln9 Mar 24 13:33:52.244: INFO: got data: { "image": "nautilus.jpg" } Mar 24 13:33:52.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 24 13:33:52.244: INFO: update-demo-nautilus-9cln9 is verified up and running STEP: scaling down the replication controller Mar 24 13:33:52.247: INFO: scanned /root for discovery docs: Mar 24 13:33:52.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7979' Mar 24 13:33:53.364: INFO: stderr: "" Mar 24 13:33:53.364: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 13:33:53.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:33:53.460: INFO: stderr: "" Mar 24 13:33:53.460: INFO: stdout: "update-demo-nautilus-98tzr update-demo-nautilus-9cln9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 24 13:33:58.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:33:58.562: INFO: stderr: "" Mar 24 13:33:58.562: INFO: stdout: "update-demo-nautilus-98tzr update-demo-nautilus-9cln9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 24 13:34:03.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:34:03.661: INFO: stderr: "" Mar 24 13:34:03.661: INFO: stdout: "update-demo-nautilus-9cln9 " Mar 24 13:34:03.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:03.760: INFO: stderr: "" Mar 24 13:34:03.760: INFO: stdout: "true" Mar 24 13:34:03.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:03.854: INFO: stderr: "" Mar 24 13:34:03.854: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 13:34:03.855: INFO: validating pod update-demo-nautilus-9cln9 Mar 24 13:34:03.858: INFO: got data: { "image": "nautilus.jpg" } Mar 24 13:34:03.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 13:34:03.858: INFO: update-demo-nautilus-9cln9 is verified up and running STEP: scaling up the replication controller Mar 24 13:34:03.860: INFO: scanned /root for discovery docs: Mar 24 13:34:03.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7979' Mar 24 13:34:05.021: INFO: stderr: "" Mar 24 13:34:05.021: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 13:34:05.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:34:05.120: INFO: stderr: "" Mar 24 13:34:05.120: INFO: stdout: "update-demo-nautilus-9cln9 update-demo-nautilus-m7khf " Mar 24 13:34:05.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:05.207: INFO: stderr: "" Mar 24 13:34:05.207: INFO: stdout: "true" Mar 24 13:34:05.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:05.414: INFO: stderr: "" Mar 24 13:34:05.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 13:34:05.414: INFO: validating pod update-demo-nautilus-9cln9 Mar 24 13:34:05.455: INFO: got data: { "image": "nautilus.jpg" } Mar 24 13:34:05.456: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 13:34:05.456: INFO: update-demo-nautilus-9cln9 is verified up and running Mar 24 13:34:05.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7khf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:05.617: INFO: stderr: "" Mar 24 13:34:05.617: INFO: stdout: "" Mar 24 13:34:05.617: INFO: update-demo-nautilus-m7khf is created but not running Mar 24 13:34:10.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7979' Mar 24 13:34:10.724: INFO: stderr: "" Mar 24 13:34:10.724: INFO: stdout: "update-demo-nautilus-9cln9 update-demo-nautilus-m7khf " Mar 24 13:34:10.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:10.820: INFO: stderr: "" Mar 24 13:34:10.820: INFO: stdout: "true" Mar 24 13:34:10.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cln9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:10.921: INFO: stderr: "" Mar 24 13:34:10.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 13:34:10.921: INFO: validating pod update-demo-nautilus-9cln9 Mar 24 13:34:10.925: INFO: got data: { "image": "nautilus.jpg" } Mar 24 13:34:10.925: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 13:34:10.925: INFO: update-demo-nautilus-9cln9 is verified up and running Mar 24 13:34:10.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7khf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:11.015: INFO: stderr: "" Mar 24 13:34:11.015: INFO: stdout: "true" Mar 24 13:34:11.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7khf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7979' Mar 24 13:34:11.111: INFO: stderr: "" Mar 24 13:34:11.111: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 13:34:11.111: INFO: validating pod update-demo-nautilus-m7khf Mar 24 13:34:11.115: INFO: got data: { "image": "nautilus.jpg" } Mar 24 13:34:11.115: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 13:34:11.115: INFO: update-demo-nautilus-m7khf is verified up and running STEP: using delete to clean up resources Mar 24 13:34:11.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7979' Mar 24 13:34:11.261: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 13:34:11.261: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 24 13:34:11.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7979' Mar 24 13:34:11.353: INFO: stderr: "No resources found.\n" Mar 24 13:34:11.353: INFO: stdout: "" Mar 24 13:34:11.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7979 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 13:34:11.501: INFO: stderr: "" Mar 24 13:34:11.501: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:34:11.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7979" for this suite. 
Mar 24 13:34:33.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:34:33.602: INFO: namespace kubectl-7979 deletion completed in 22.097280147s • [SLOW TEST:47.781 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:34:33.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Mar 24 13:34:33.696: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:34:33.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-9887" for this suite. Mar 24 13:34:39.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:34:39.871: INFO: namespace kubectl-9887 deletion completed in 6.094711984s • [SLOW TEST:6.269 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:34:39.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 24 13:34:43.950: INFO: Pod pod-hostip-b45dba58-31a4-4514-82a8-821213ad9abb has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:34:43.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3513" for this suite. 
Mar 24 13:35:05.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:35:06.047: INFO: namespace pods-3513 deletion completed in 22.093039499s • [SLOW TEST:26.176 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:35:06.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 13:35:06.121: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 24 13:35:11.126: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 24 13:35:11.126: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 24 13:35:13.132: INFO: Creating deployment "test-rollover-deployment" Mar 24 13:35:13.145: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 24 13:35:15.152: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 24 13:35:15.174: INFO: Ensure that both replica 
sets have 1 created replica Mar 24 13:35:15.180: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 24 13:35:15.188: INFO: Updating deployment test-rollover-deployment Mar 24 13:35:15.188: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 24 13:35:17.228: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 24 13:35:17.234: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 24 13:35:17.241: INFO: all replica sets need to contain the pod-template-hash label Mar 24 13:35:17.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653715, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:35:19.250: INFO: all replica sets need to contain the pod-template-hash label Mar 24 13:35:19.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, 
loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653719, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:35:21.252: INFO: all replica sets need to contain the pod-template-hash label Mar 24 13:35:21.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653719, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:35:23.250: INFO: all replica sets need to contain the pod-template-hash label Mar 24 13:35:23.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653719, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:35:25.249: INFO: all replica sets need to contain the pod-template-hash label Mar 24 13:35:25.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653719, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:35:27.251: INFO: all replica sets need to contain the pod-template-hash label Mar 24 13:35:27.251: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653719, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720653713, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 13:35:29.281: INFO: Mar 24 13:35:29.281: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 24 13:35:29.289: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7469,SelfLink:/apis/apps/v1/namespaces/deployment-7469/deployments/test-rollover-deployment,UID:87391cd9-2e89-4efc-b9a1-ba2bd05a892a,ResourceVersion:1599574,Generation:2,CreationTimestamp:2020-03-24 13:35:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-24 13:35:13 +0000 UTC 2020-03-24 
13:35:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-24 13:35:29 +0000 UTC 2020-03-24 13:35:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 24 13:35:29.292: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7469,SelfLink:/apis/apps/v1/namespaces/deployment-7469/replicasets/test-rollover-deployment-854595fc44,UID:887aaf78-66f0-4dcc-b975-698a9bf6c623,ResourceVersion:1599562,Generation:2,CreationTimestamp:2020-03-24 13:35:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 87391cd9-2e89-4efc-b9a1-ba2bd05a892a 0xc002ecab07 0xc002ecab08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 24 13:35:29.292: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 24 13:35:29.293: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7469,SelfLink:/apis/apps/v1/namespaces/deployment-7469/replicasets/test-rollover-controller,UID:4f4ba6ae-2331-48d3-b0c0-034557c41847,ResourceVersion:1599572,Generation:2,CreationTimestamp:2020-03-24 13:35:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
87391cd9-2e89-4efc-b9a1-ba2bd05a892a 0xc002eca9ef 0xc002ecaa10}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 13:35:29.293: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7469,SelfLink:/apis/apps/v1/namespaces/deployment-7469/replicasets/test-rollover-deployment-9b8b997cf,UID:2815fb3d-c397-42b8-9200-b96ad05338d3,ResourceVersion:1599526,Generation:2,CreationTimestamp:2020-03-24 13:35:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 87391cd9-2e89-4efc-b9a1-ba2bd05a892a 0xc002ecabe0 0xc002ecabe1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 13:35:29.296: INFO: Pod "test-rollover-deployment-854595fc44-29jxn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-29jxn,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7469,SelfLink:/api/v1/namespaces/deployment-7469/pods/test-rollover-deployment-854595fc44-29jxn,UID:54803878-5494-4cbc-8ddd-0466ce5b5cbe,ResourceVersion:1599541,Generation:0,CreationTimestamp:2020-03-24 13:35:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 887aaf78-66f0-4dcc-b975-698a9bf6c623 0xc002ecb7a7 0xc002ecb7a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-77wcj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-77wcj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-77wcj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ecb820} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ecb840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:35:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:35:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:35:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.62,StartTime:2020-03-24 13:35:15 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-24 13:35:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://139e64945fe52fc3f2bd33a87d5ab6515c899413af8833fbea0af1661de8f97b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:35:29.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7469" for this suite. Mar 24 13:35:35.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:35:35.379: INFO: namespace deployment-7469 deletion completed in 6.079352341s • [SLOW TEST:29.331 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:35:35.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Mar 24 13:35:35.425: INFO: 
Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix250305429/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:35:35.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-623" for this suite. Mar 24 13:35:41.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:35:41.598: INFO: namespace kubectl-623 deletion completed in 6.095607761s • [SLOW TEST:6.219 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:35:41.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete 
the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0324 13:35:42.741813 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 13:35:42.741: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:35:42.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9808" for this suite. 
Mar 24 13:35:48.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:35:48.840: INFO: namespace gc-9808 deletion completed in 6.095488928s • [SLOW TEST:7.241 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:35:48.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 24 13:35:48.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5381' Mar 24 13:35:51.559: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 24 13:36:07.442: INFO: stdout: "Created e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb\nScaling up e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" 
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 24 13:36:07.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5381' Mar 24 13:36:07.535: INFO: stderr: "" Mar 24 13:36:07.535: INFO: stdout: "e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb-sb7mr " Mar 24 13:36:07.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb-sb7mr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5381' Mar 24 13:36:07.625: INFO: stderr: "" Mar 24 13:36:07.625: INFO: stdout: "true" Mar 24 13:36:07.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb-sb7mr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5381' Mar 24 13:36:07.716: INFO: stderr: "" Mar 24 13:36:07.716: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 24 13:36:07.716: INFO: e2e-test-nginx-rc-a3bacaac4b888ee6a1e8c04c1a386ceb-sb7mr is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 24 13:36:07.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5381' Mar 24 13:36:07.820: INFO: stderr: "" Mar 24 13:36:07.820: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:36:07.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5381" for this suite. 
Mar 24 13:36:13.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:36:13.961: INFO: namespace kubectl-5381 deletion completed in 6.096802143s • [SLOW TEST:25.121 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:36:13.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3867 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 24 13:36:14.066: INFO: Found 0 stateful pods, waiting for 3 Mar 24 13:36:24.070: 
INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:36:24.070: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:36:24.070: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 24 13:36:24.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3867 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 13:36:24.359: INFO: stderr: "I0324 13:36:24.209449 2278 log.go:172] (0xc000132bb0) (0xc000898640) Create stream\nI0324 13:36:24.209504 2278 log.go:172] (0xc000132bb0) (0xc000898640) Stream added, broadcasting: 1\nI0324 13:36:24.212109 2278 log.go:172] (0xc000132bb0) Reply frame received for 1\nI0324 13:36:24.212158 2278 log.go:172] (0xc000132bb0) (0xc000632320) Create stream\nI0324 13:36:24.212172 2278 log.go:172] (0xc000132bb0) (0xc000632320) Stream added, broadcasting: 3\nI0324 13:36:24.213611 2278 log.go:172] (0xc000132bb0) Reply frame received for 3\nI0324 13:36:24.213655 2278 log.go:172] (0xc000132bb0) (0xc0001b0000) Create stream\nI0324 13:36:24.213669 2278 log.go:172] (0xc000132bb0) (0xc0001b0000) Stream added, broadcasting: 5\nI0324 13:36:24.214624 2278 log.go:172] (0xc000132bb0) Reply frame received for 5\nI0324 13:36:24.293917 2278 log.go:172] (0xc000132bb0) Data frame received for 5\nI0324 13:36:24.293941 2278 log.go:172] (0xc0001b0000) (5) Data frame handling\nI0324 13:36:24.293959 2278 log.go:172] (0xc0001b0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 13:36:24.353497 2278 log.go:172] (0xc000132bb0) Data frame received for 3\nI0324 13:36:24.353579 2278 log.go:172] (0xc000632320) (3) Data frame handling\nI0324 13:36:24.353597 2278 log.go:172] (0xc000632320) (3) Data frame sent\nI0324 13:36:24.353650 2278 log.go:172] (0xc000132bb0) Data frame received for 3\nI0324 13:36:24.353672 2278 log.go:172] 
(0xc000632320) (3) Data frame handling\nI0324 13:36:24.353721 2278 log.go:172] (0xc000132bb0) Data frame received for 5\nI0324 13:36:24.353760 2278 log.go:172] (0xc0001b0000) (5) Data frame handling\nI0324 13:36:24.355373 2278 log.go:172] (0xc000132bb0) Data frame received for 1\nI0324 13:36:24.355389 2278 log.go:172] (0xc000898640) (1) Data frame handling\nI0324 13:36:24.355401 2278 log.go:172] (0xc000898640) (1) Data frame sent\nI0324 13:36:24.355415 2278 log.go:172] (0xc000132bb0) (0xc000898640) Stream removed, broadcasting: 1\nI0324 13:36:24.355441 2278 log.go:172] (0xc000132bb0) Go away received\nI0324 13:36:24.355758 2278 log.go:172] (0xc000132bb0) (0xc000898640) Stream removed, broadcasting: 1\nI0324 13:36:24.355771 2278 log.go:172] (0xc000132bb0) (0xc000632320) Stream removed, broadcasting: 3\nI0324 13:36:24.355776 2278 log.go:172] (0xc000132bb0) (0xc0001b0000) Stream removed, broadcasting: 5\n"
Mar 24 13:36:24.359: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 24 13:36:24.359: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Mar 24 13:36:34.406: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 24 13:36:44.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3867 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 24 13:36:44.670: INFO: stderr: "I0324 13:36:44.581634 2300 log.go:172] (0xc000a3a370) (0xc000422820) Create stream\nI0324 13:36:44.581690 2300 log.go:172] (0xc000a3a370) (0xc000422820) Stream added, broadcasting: 1\nI0324 13:36:44.588776 2300 log.go:172] (0xc000a3a370) Reply frame received for 1\nI0324 13:36:44.588823 2300 log.go:172] (0xc000a3a370) (0xc0003403c0) Create stream\nI0324 13:36:44.588836 2300 log.go:172] (0xc000a3a370) (0xc0003403c0) Stream added, broadcasting: 3\nI0324 13:36:44.590767 2300 log.go:172] (0xc000a3a370) Reply frame received for 3\nI0324 13:36:44.590802 2300 log.go:172] (0xc000a3a370) (0xc000340460) Create stream\nI0324 13:36:44.590809 2300 log.go:172] (0xc000a3a370) (0xc000340460) Stream added, broadcasting: 5\nI0324 13:36:44.591541 2300 log.go:172] (0xc000a3a370) Reply frame received for 5\nI0324 13:36:44.665551 2300 log.go:172] (0xc000a3a370) Data frame received for 3\nI0324 13:36:44.665578 2300 log.go:172] (0xc0003403c0) (3) Data frame handling\nI0324 13:36:44.665598 2300 log.go:172] (0xc0003403c0) (3) Data frame sent\nI0324 13:36:44.665606 2300 log.go:172] (0xc000a3a370) Data frame received for 3\nI0324 13:36:44.665610 2300 log.go:172] (0xc0003403c0) (3) Data frame handling\nI0324 13:36:44.665706 2300 log.go:172] (0xc000a3a370) Data frame received for 5\nI0324 13:36:44.665733 2300 log.go:172] (0xc000340460) (5) Data frame handling\nI0324 13:36:44.665755 2300 log.go:172] (0xc000340460) (5) Data frame sent\nI0324 13:36:44.665775 2300 log.go:172] (0xc000a3a370) Data frame received for 5\nI0324 13:36:44.665805 2300 log.go:172] (0xc000340460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 13:36:44.666815 2300 log.go:172] (0xc000a3a370) Data frame received for 1\nI0324 13:36:44.666826 2300 log.go:172] (0xc000422820) (1) Data frame handling\nI0324 13:36:44.666832 2300 log.go:172] (0xc000422820) (1) Data frame sent\nI0324 13:36:44.666948 2300 log.go:172] (0xc000a3a370) (0xc000422820) Stream removed, broadcasting: 1\nI0324 13:36:44.666974 2300 log.go:172] (0xc000a3a370) Go away received\nI0324 13:36:44.667245 2300 log.go:172] (0xc000a3a370) (0xc000422820) Stream removed, broadcasting: 1\nI0324 13:36:44.667258 2300 log.go:172] (0xc000a3a370) (0xc0003403c0) Stream removed, broadcasting: 3\nI0324 13:36:44.667263 2300 log.go:172] (0xc000a3a370) (0xc000340460) Stream removed, broadcasting: 5\n"
Mar 24 13:36:44.670: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 24 13:36:44.670: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 24 13:36:54.690: INFO: Waiting for StatefulSet statefulset-3867/ss2 to complete update
Mar 24 13:36:54.690: INFO: Waiting for Pod statefulset-3867/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 24 13:36:54.690: INFO: Waiting for Pod statefulset-3867/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 24 13:37:04.698: INFO: Waiting for StatefulSet statefulset-3867/ss2 to complete update
STEP: Rolling back to a previous revision
Mar 24 13:37:14.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3867 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 24 13:37:14.948: INFO: stderr: "I0324 13:37:14.822286 2321 log.go:172] (0xc000a82420) (0xc00036c6e0) Create stream\nI0324 13:37:14.822346 2321 log.go:172] (0xc000a82420) (0xc00036c6e0) Stream added, broadcasting: 1\nI0324 13:37:14.825290 2321 log.go:172] (0xc000a82420) Reply frame received for 1\nI0324 13:37:14.825349 2321 log.go:172] (0xc000a82420) (0xc0003dc280) Create stream\nI0324 13:37:14.825364 2321 log.go:172] (0xc000a82420) (0xc0003dc280) Stream added, broadcasting: 3\nI0324 13:37:14.826354 2321 log.go:172] (0xc000a82420) Reply frame received for 3\nI0324 13:37:14.826398 2321 log.go:172] (0xc000a82420) (0xc00036c000) Create stream\nI0324 13:37:14.826411 2321 log.go:172] (0xc000a82420) (0xc00036c000) Stream added, broadcasting: 5\nI0324 13:37:14.827118 2321 log.go:172] (0xc000a82420) Reply frame received for 5\nI0324 13:37:14.908486 2321 log.go:172] (0xc000a82420) Data frame received for 5\nI0324 13:37:14.908518 2321 log.go:172] (0xc00036c000) (5) Data frame handling\nI0324 13:37:14.908534 2321 log.go:172] (0xc00036c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 13:37:14.942296 2321 log.go:172] (0xc000a82420) Data frame received for 5\nI0324 13:37:14.942336 2321 log.go:172] (0xc00036c000) (5) Data frame handling\nI0324 13:37:14.942368 2321 log.go:172] (0xc000a82420) Data frame received for 3\nI0324 13:37:14.942391 2321 log.go:172] (0xc0003dc280) (3) Data frame handling\nI0324 13:37:14.942413 2321 log.go:172] (0xc0003dc280) (3) Data frame sent\nI0324 13:37:14.942453 2321 log.go:172] (0xc000a82420) Data frame received for 3\nI0324 13:37:14.942472 2321 log.go:172] (0xc0003dc280) (3) Data frame handling\nI0324 13:37:14.944577 2321 log.go:172] (0xc000a82420) Data frame received for 1\nI0324 13:37:14.944606 2321 log.go:172] (0xc00036c6e0) (1) Data frame handling\nI0324 13:37:14.944624 2321 log.go:172] (0xc00036c6e0) (1) Data frame sent\nI0324 13:37:14.944647 2321 log.go:172] (0xc000a82420) (0xc00036c6e0) Stream removed, broadcasting: 1\nI0324 13:37:14.944676 2321 log.go:172] (0xc000a82420) Go away received\nI0324 13:37:14.944994 2321 log.go:172] (0xc000a82420) (0xc00036c6e0) Stream removed, broadcasting: 1\nI0324 13:37:14.945011 2321 log.go:172] (0xc000a82420) (0xc0003dc280) Stream removed, broadcasting: 3\nI0324 13:37:14.945022 2321 log.go:172] (0xc000a82420) (0xc00036c000) Stream removed, broadcasting: 5\n"
Mar 24 13:37:14.949: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 24 13:37:14.949: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Mar 24 13:37:24.979: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 24 13:37:35.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3867 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 24 13:37:35.226: INFO: stderr: "I0324 13:37:35.144481 2343 log.go:172] (0xc00094a6e0) (0xc000222be0) Create stream\nI0324 13:37:35.144530 2343 log.go:172] (0xc00094a6e0) (0xc000222be0) Stream added, broadcasting: 1\nI0324 13:37:35.150168 2343 log.go:172] (0xc00094a6e0) Reply frame received for 1\nI0324 13:37:35.150269 2343 log.go:172] (0xc00094a6e0) (0xc000222320) Create stream\nI0324 13:37:35.150293 2343 log.go:172] (0xc00094a6e0) (0xc000222320) Stream added, broadcasting: 3\nI0324 13:37:35.151780 2343 log.go:172] (0xc00094a6e0) Reply frame received for 3\nI0324 13:37:35.151848 2343 log.go:172] (0xc00094a6e0) (0xc00009c000) Create stream\nI0324 13:37:35.151911 2343 log.go:172] (0xc00094a6e0) (0xc00009c000) Stream added, broadcasting: 5\nI0324 13:37:35.153448 2343 log.go:172] (0xc00094a6e0) Reply frame received for 5\nI0324 13:37:35.220241 2343 log.go:172] (0xc00094a6e0) Data frame received for 3\nI0324 13:37:35.220287 2343 log.go:172] (0xc00094a6e0) Data frame received for 5\nI0324 13:37:35.220327 2343 log.go:172] (0xc00009c000) (5) Data frame handling\nI0324 13:37:35.220355 2343 log.go:172] (0xc00009c000) (5) Data frame sent\nI0324 13:37:35.220373 2343 log.go:172] (0xc00094a6e0) Data frame received for 5\nI0324 13:37:35.220387 2343 log.go:172] (0xc00009c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 13:37:35.220402 2343 log.go:172] (0xc000222320) (3) Data frame handling\nI0324 13:37:35.220415 2343 log.go:172] (0xc000222320) (3) Data frame sent\nI0324 13:37:35.220428 2343 log.go:172] (0xc00094a6e0) Data frame received for 3\nI0324 13:37:35.220440 2343 log.go:172] (0xc000222320) (3) Data frame handling\nI0324 13:37:35.221848 2343 log.go:172] (0xc00094a6e0) Data frame received for 1\nI0324 13:37:35.221881 2343 log.go:172] (0xc000222be0) (1) Data frame handling\nI0324 13:37:35.221906 2343 log.go:172] (0xc000222be0) (1) Data frame sent\nI0324 13:37:35.221927 2343 log.go:172] (0xc00094a6e0) (0xc000222be0) Stream removed, broadcasting: 1\nI0324 13:37:35.221951 2343 log.go:172] (0xc00094a6e0) Go away received\nI0324 13:37:35.222390 2343 log.go:172] (0xc00094a6e0) (0xc000222be0) Stream removed, broadcasting: 1\nI0324 13:37:35.222426 2343 log.go:172] (0xc00094a6e0) (0xc000222320) Stream removed, broadcasting: 3\nI0324 13:37:35.222451 2343 log.go:172] (0xc00094a6e0) (0xc00009c000) Stream removed, broadcasting: 5\n"
Mar 24 13:37:35.226: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 24 13:37:35.226: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Mar 24 13:37:55.265: INFO: Deleting all statefulset in ns statefulset-3867
Mar 24 13:37:55.266: INFO: Scaling statefulset ss2 to 0
Mar 24 13:38:15.325: INFO: Waiting for statefulset status.replicas updated to 0
Mar 24 13:38:15.328: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:38:15.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3867" for this suite.
Mar 24 13:38:21.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:38:21.474: INFO: namespace statefulset-3867 deletion completed in 6.122232293s
• [SLOW TEST:127.513 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:38:21.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 13:38:21.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3" in namespace "projected-5281" to be "success or failure"
Mar 24 13:38:21.567: INFO: Pod "downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980338ms
Mar 24 13:38:23.570: INFO: Pod "downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007446693s
Mar 24 13:38:25.574: INFO: Pod "downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011833899s
STEP: Saw pod success
Mar 24 13:38:25.574: INFO: Pod "downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3" satisfied condition "success or failure"
Mar 24 13:38:25.578: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3 container client-container:
STEP: delete the pod
Mar 24 13:38:25.615: INFO: Waiting for pod downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3 to disappear
Mar 24 13:38:25.638: INFO: Pod downwardapi-volume-a3f70b53-7094-4e1c-bd8d-1d1d4758a4e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:38:25.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5281" for this suite.
Mar 24 13:38:31.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:38:31.743: INFO: namespace projected-5281 deletion completed in 6.102618005s
• [SLOW TEST:10.269 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:38:31.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 24 13:38:31.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9817'
Mar 24 13:38:31.903: INFO: stderr: ""
Mar 24 13:38:31.903: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Mar 24 13:38:31.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9817'
Mar 24 13:38:41.863: INFO: stderr: ""
Mar 24 13:38:41.863: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:38:41.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9817" for this suite.
Mar 24 13:38:47.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:38:47.982: INFO: namespace kubectl-9817 deletion completed in 6.116043251s
• [SLOW TEST:16.238 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:38:47.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-fa4131bf-18d9-42c4-b858-e353cdde58d1
STEP: Creating a pod to test consume configMaps
Mar 24 13:38:48.102: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147" in namespace "projected-9662" to be "success or failure"
Mar 24 13:38:48.106: INFO: Pod "pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131782ms
Mar 24 13:38:50.147: INFO: Pod "pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045236535s
Mar 24 13:38:52.151: INFO: Pod "pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04947518s
STEP: Saw pod success
Mar 24 13:38:52.151: INFO: Pod "pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147" satisfied condition "success or failure"
Mar 24 13:38:52.154: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147 container projected-configmap-volume-test:
STEP: delete the pod
Mar 24 13:38:52.174: INFO: Waiting for pod pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147 to disappear
Mar 24 13:38:52.194: INFO: Pod pod-projected-configmaps-4b1d6054-9d5e-4f48-807f-3c282d046147 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:38:52.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9662" for this suite.
Mar 24 13:38:58.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:38:58.296: INFO: namespace projected-9662 deletion completed in 6.098243285s
• [SLOW TEST:10.314 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:38:58.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Mar 24 13:38:58.360: INFO: Waiting up to 5m0s for pod "client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d" in namespace "containers-8834" to be "success or failure"
Mar 24 13:38:58.363: INFO: Pod "client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731453ms
Mar 24 13:39:00.374: INFO: Pod "client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014721304s
Mar 24 13:39:02.379: INFO: Pod "client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019235419s
STEP: Saw pod success
Mar 24 13:39:02.379: INFO: Pod "client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d" satisfied condition "success or failure"
Mar 24 13:39:02.382: INFO: Trying to get logs from node iruya-worker pod client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d container test-container:
STEP: delete the pod
Mar 24 13:39:02.401: INFO: Waiting for pod client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d to disappear
Mar 24 13:39:02.405: INFO: Pod client-containers-d594d41d-112c-40c6-9ed5-6cd6cba8c39d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:39:02.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8834" for this suite.
Mar 24 13:39:08.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:39:08.576: INFO: namespace containers-8834 deletion completed in 6.16744374s
• [SLOW TEST:10.280 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:39:08.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Mar 24 13:39:08.655: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:39:15.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9988" for this suite.
Mar 24 13:39:37.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:39:37.787: INFO: namespace init-container-9988 deletion completed in 22.098572571s
• [SLOW TEST:29.210 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:39:37.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4eb9f598-05c6-457f-b252-543464bbd8a1
STEP: Creating a pod to test consume secrets
Mar 24 13:39:37.929: INFO: Waiting up to 5m0s for pod "pod-secrets-df74904d-7272-44e0-8134-c7129342b52e" in namespace "secrets-3570" to be "success or failure"
Mar 24 13:39:37.933: INFO: Pod "pod-secrets-df74904d-7272-44e0-8134-c7129342b52e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541243ms
Mar 24 13:39:39.937: INFO: Pod "pod-secrets-df74904d-7272-44e0-8134-c7129342b52e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008067792s
Mar 24 13:39:41.941: INFO: Pod "pod-secrets-df74904d-7272-44e0-8134-c7129342b52e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012041974s
STEP: Saw pod success
Mar 24 13:39:41.941: INFO: Pod "pod-secrets-df74904d-7272-44e0-8134-c7129342b52e" satisfied condition "success or failure"
Mar 24 13:39:41.945: INFO: Trying to get logs from node iruya-worker pod pod-secrets-df74904d-7272-44e0-8134-c7129342b52e container secret-volume-test:
STEP: delete the pod
Mar 24 13:39:42.001: INFO: Waiting for pod pod-secrets-df74904d-7272-44e0-8134-c7129342b52e to disappear
Mar 24 13:39:42.100: INFO: Pod pod-secrets-df74904d-7272-44e0-8134-c7129342b52e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:39:42.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3570" for this suite.
Mar 24 13:39:48.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:39:48.197: INFO: namespace secrets-3570 deletion completed in 6.093759317s
STEP: Destroying namespace "secret-namespace-7540" for this suite.
Mar 24 13:39:54.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:39:54.285: INFO: namespace secret-namespace-7540 deletion completed in 6.087989435s
• [SLOW TEST:16.498 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:39:54.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-845ba627-6e37-4d40-9acf-eeb030ca7002 in namespace container-probe-4871
Mar 24 13:39:58.358: INFO: Started pod busybox-845ba627-6e37-4d40-9acf-eeb030ca7002 in namespace container-probe-4871
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 13:39:58.361: INFO: Initial restart count of pod busybox-845ba627-6e37-4d40-9acf-eeb030ca7002 is 0
Mar 24 13:40:52.519: INFO: Restart count of pod container-probe-4871/busybox-845ba627-6e37-4d40-9acf-eeb030ca7002 is now 1 (54.158683458s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:40:52.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4871" for this suite.
Mar 24 13:40:58.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:40:58.661: INFO: namespace container-probe-4871 deletion completed in 6.109588487s
• [SLOW TEST:64.376 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:40:58.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8
Mar 24 13:40:58.746: INFO: Pod name my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8: Found 0 pods out of 1
Mar 24 13:41:03.751: INFO: Pod name my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8: Found 1 pods out of 1
Mar 24 13:41:03.751: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8" are running
Mar 24 13:41:03.754: INFO: Pod "my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8-54wpq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:40:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:41:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:41:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-24 13:40:58 +0000 UTC Reason: Message:}])
Mar 24 13:41:03.754: INFO: Trying to dial the pod
Mar 24 13:41:08.766: INFO: Controller my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8: Got expected result from replica 1 [my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8-54wpq]: "my-hostname-basic-d43c8108-8680-4242-95b9-96dd843330b8-54wpq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:41:08.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8328" for this suite.
Mar 24 13:41:14.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:41:14.856: INFO: namespace replication-controller-8328 deletion completed in 6.085729848s • [SLOW TEST:16.194 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:41:14.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-19debfd0-66d9-4663-a9b8-f3044d62b43c STEP: Creating a pod to test consume configMaps Mar 24 13:41:14.946: INFO: Waiting up to 5m0s for pod "pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6" in namespace "configmap-7303" to be "success or failure" Mar 24 13:41:14.950: INFO: Pod "pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153106ms Mar 24 13:41:16.954: INFO: Pod "pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007876765s Mar 24 13:41:18.958: INFO: Pod "pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011754047s STEP: Saw pod success Mar 24 13:41:18.958: INFO: Pod "pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6" satisfied condition "success or failure" Mar 24 13:41:18.960: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6 container configmap-volume-test: STEP: delete the pod Mar 24 13:41:18.983: INFO: Waiting for pod pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6 to disappear Mar 24 13:41:18.988: INFO: Pod pod-configmaps-06db8e6f-184e-4acf-a3b8-7064c81a36c6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:41:18.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7303" for this suite. Mar 24 13:41:24.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:41:25.078: INFO: namespace configmap-7303 deletion completed in 6.087668482s • [SLOW TEST:10.222 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Mar 24 13:41:25.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:41:58.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5396" for this suite. 
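[Editor's note] The three containers above ('terminate-cmd-rpa', '-rpof', '-rpn') correspond to the three pod restart policies: Always, OnFailure, Never. The log does not show the pod specs; a hedged sketch of the OnFailure case, with illustrative names, would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-example   # name illustrative
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    # container exits non-zero once; under OnFailure the kubelet restarts it,
    # which is what the expected 'RestartCount' / 'Phase' / 'State' checks observe
    command: ["sh", "-c", "exit 1"]
```

With restartPolicy Never the same exit would instead leave the pod in phase Failed with RestartCount 0.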
Mar 24 13:42:04.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:42:04.654: INFO: namespace container-runtime-5396 deletion completed in 6.107937457s • [SLOW TEST:39.575 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:42:04.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-3620 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3620 to expose endpoints map[] Mar 24 13:42:04.772: INFO: successfully validated that service endpoint-test2 in namespace services-3620 exposes 
endpoints map[] (17.879003ms elapsed) STEP: Creating pod pod1 in namespace services-3620 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3620 to expose endpoints map[pod1:[80]] Mar 24 13:42:08.973: INFO: successfully validated that service endpoint-test2 in namespace services-3620 exposes endpoints map[pod1:[80]] (4.194682774s elapsed) STEP: Creating pod pod2 in namespace services-3620 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3620 to expose endpoints map[pod1:[80] pod2:[80]] Mar 24 13:42:12.055: INFO: successfully validated that service endpoint-test2 in namespace services-3620 exposes endpoints map[pod1:[80] pod2:[80]] (3.079546248s elapsed) STEP: Deleting pod pod1 in namespace services-3620 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3620 to expose endpoints map[pod2:[80]] Mar 24 13:42:13.103: INFO: successfully validated that service endpoint-test2 in namespace services-3620 exposes endpoints map[pod2:[80]] (1.042097238s elapsed) STEP: Deleting pod pod2 in namespace services-3620 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3620 to expose endpoints map[] Mar 24 13:42:14.116: INFO: successfully validated that service endpoint-test2 in namespace services-3620 exposes endpoints map[] (1.00877096s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:42:14.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3620" for this suite. 
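[Editor's note] The service/pod pairing validated above is also built programmatically. A sketch of the shape involved, with an assumed selector label (the real label key is not visible in this log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    test: endpoint-test2     # assumed label key
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    test: endpoint-test2     # matching label: pod's IP:80 appears in the endpoints
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
```

Creating and deleting pod1/pod2 drives the endpoints map through map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → map[pod2:[80]] → map[], exactly the sequence logged above.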
Mar 24 13:42:36.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:42:36.410: INFO: namespace services-3620 deletion completed in 22.084274954s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:31.755 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:42:36.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0324 13:42:47.703100 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 24 13:42:47.703: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:42:47.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9667" for this suite. 
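[Editor's note] The "half of pods ... have rc simpletest-rc-to-stay as owner as well" step above means each such pod carries two ownerReferences. A hedged sketch of that metadata (UIDs elided, since they are generated per run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod-example        # illustrative
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: "<uid-of-deleted-rc>"        # placeholder; set per run
    blockOwnerDeletion: true          # makes foreground deletion wait on this dependent
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: "<uid-of-surviving-rc>"      # placeholder
```

Because simpletest-rc-to-stay remains a valid owner after simpletest-rc-to-be-deleted is removed, the garbage collector must not delete the pod, which is the behavior this test asserts.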
Mar 24 13:42:53.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:42:53.799: INFO: namespace gc-9667 deletion completed in 6.092431212s • [SLOW TEST:17.389 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:42:53.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4ad45bcc-fb2c-47ef-acb6-8c7d3682709e STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:43:00.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7052" for this suite. 
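[Editor's note] The "Waiting for pod with text data / binary data" steps above exercise a ConfigMap that carries both `data` and `binaryData` keys. A minimal sketch of that shape, with illustrative names and content:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example      # illustrative
data:
  data-1: value-1                     # plain text key
binaryData:
  dump.bin: AQIDBAU=                  # arbitrary bytes, base64-encoded per the API
```

When mounted as a volume, both keys surface as files (`data-1` and `dump.bin`) in the pod, and the binary file's bytes must round-trip exactly, which is what the test verifies.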
Mar 24 13:43:22.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:43:22.192: INFO: namespace configmap-7052 deletion completed in 22.096221009s • [SLOW TEST:28.392 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:43:22.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 24 13:43:22.238: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:43:29.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7371" for this suite. 
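[Editor's note] "PodSpec: initContainers in spec.initContainers" above refers to a pod of roughly the following shape (a sketch with assumed names and images; the real spec is built by the framework):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # illustrative
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]         # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["/bin/true"]         # must exit 0 before the app container starts
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```

Init containers run sequentially to completion before the app container is invoked; the test asserts that ordering on a RestartNever pod.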
Mar 24 13:43:35.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:43:35.356: INFO: namespace init-container-7371 deletion completed in 6.07924843s • [SLOW TEST:13.164 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:43:35.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 24 13:43:35.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 24 13:43:35.496: INFO: stderr: "" Mar 24 13:43:35.496: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further 
debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:43:35.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5268" for this suite. Mar 24 13:43:41.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:43:41.616: INFO: namespace kubectl-5268 deletion completed in 6.116225611s • [SLOW TEST:6.259 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:43:41.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in 
namespace statefulset-9561 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9561 STEP: Creating statefulset with conflicting port in namespace statefulset-9561 STEP: Waiting until pod test-pod will start running in namespace statefulset-9561 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9561 Mar 24 13:43:45.745: INFO: Observed stateful pod in namespace: statefulset-9561, name: ss-0, uid: 719b6b85-7ea1-4ac4-9153-1923ef58b579, status phase: Pending. Waiting for statefulset controller to delete. Mar 24 13:48:45.745: INFO: Pod ss-0 expected to be re-created at least once [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 24 13:48:45.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-9561' Mar 24 13:48:48.477: INFO: stderr: "" Mar 24 13:48:48.477: INFO: stdout: "Name: ss-0\nNamespace: statefulset-9561\nPriority: 0\nNode: iruya-worker/\nLabels: baz=blah\n controller-revision-hash=ss-5867494796\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9gt5d (ro)\nVolumes:\n default-token-9gt5d:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9gt5d\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 
300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m5s kubelet, iruya-worker Predicate PodFitsHostPorts failed\n" Mar 24 13:48:48.477: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-9561 Priority: 0 Node: iruya-worker/ Labels: baz=blah controller-revision-hash=ss-5867494796 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-9gt5d (ro) Volumes: default-token-9gt5d: Type: Secret (a volume populated by a Secret) SecretName: default-token-9gt5d Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m5s kubelet, iruya-worker Predicate PodFitsHostPorts failed Mar 24 13:48:48.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-9561 --tail=100' Mar 24 13:48:48.584: INFO: rc: 1 Mar 24 13:48:48.584: INFO: Last 100 log lines of ss-0: Mar 24 13:48:48.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-9561' Mar 24 13:48:48.697: INFO: stderr: "" Mar 24 13:48:48.697: INFO: stdout: "Name: test-pod\nNamespace: statefulset-9561\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 24 Mar 2020 13:43:41 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.244.2.218\nContainers:\n nginx:\n Container ID: containerd://aeb873e07dd7e29a3f2b69b472d1151c12b1e5471d7324489aff74c19ec1ed90\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n 
Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Tue, 24 Mar 2020 13:43:44 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9gt5d (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9gt5d:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9gt5d\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m5s kubelet, iruya-worker Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m4s kubelet, iruya-worker Created container nginx\n Normal Started 5m4s kubelet, iruya-worker Started container nginx\n" Mar 24 13:48:48.698: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-9561 Priority: 0 Node: iruya-worker/172.17.0.6 Start Time: Tue, 24 Mar 2020 13:43:41 +0000 Labels: Annotations: Status: Running IP: 10.244.2.218 Containers: nginx: Container ID: containerd://aeb873e07dd7e29a3f2b69b472d1151c12b1e5471d7324489aff74c19ec1ed90 Image: docker.io/library/nginx:1.14-alpine Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Tue, 24 Mar 2020 13:43:44 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-9gt5d (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-9gt5d: Type: Secret (a volume populated by a Secret) SecretName: default-token-9gt5d Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute 
for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m5s kubelet, iruya-worker Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m4s kubelet, iruya-worker Created container nginx Normal Started 5m4s kubelet, iruya-worker Started container nginx Mar 24 13:48:48.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-9561 --tail=100' Mar 24 13:48:48.803: INFO: stderr: "" Mar 24 13:48:48.803: INFO: stdout: "" Mar 24 13:48:48.803: INFO: Last 100 log lines of test-pod: Mar 24 13:48:48.803: INFO: Deleting all statefulset in ns statefulset-9561 Mar 24 13:48:48.806: INFO: Scaling statefulset ss to 0 Mar 24 13:48:58.831: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 13:48:58.834: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "statefulset-9561". STEP: Found 13 events. 
Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:41 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:41 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:41 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-9561/ss is recreating failed Pod ss-0 Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:41 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:41 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:42 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:42 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:43 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again. 
Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:43 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:43 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:43 +0000 UTC - event for test-pod: {kubelet iruya-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:44 +0000 UTC - event for test-pod: {kubelet iruya-worker} Created: Created container nginx Mar 24 13:48:58.851: INFO: At 2020-03-24 13:43:44 +0000 UTC - event for test-pod: {kubelet iruya-worker} Started: Started container nginx Mar 24 13:48:58.853: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 13:48:58.853: INFO: test-pod iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:43:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:43:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:43:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 13:43:41 +0000 UTC }] Mar 24 13:48:58.853: INFO: Mar 24 13:48:58.877: INFO: Logging node info for node iruya-control-plane Mar 24 13:48:58.880: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-control-plane,UID:5b69a0f9-55ac-48be-a8d0-5e04b939b798,ResourceVersion:1602329,Generation:0,CreationTimestamp:2020-03-15 18:24:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-control-plane,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: 
/run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-03-24 13:48:31 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-24 13:48:31 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-24 13:48:31 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-24 13:48:31 +0000 UTC 2020-03-15 18:25:00 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.7} {Hostname iruya-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09f14f6f4d1640fcaab2243401c9f154,SystemUUID:7c6ca533-492e-400c-b058-c282f97a69ec,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Mar 24 13:48:58.880: INFO: Logging kubelet events for node iruya-control-plane Mar 24 13:48:58.882: INFO: Logging pods the kubelet thinks is on node iruya-control-plane Mar 24 13:48:58.905: INFO: kube-apiserver-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.905: INFO: Container kube-apiserver ready: true, restart count 0 Mar 24 13:48:58.905: INFO: kube-controller-manager-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.905: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 24 13:48:58.905: INFO: kube-scheduler-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.905: INFO: Container kube-scheduler ready: true, restart count 0 Mar 24 13:48:58.905: INFO: etcd-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.905: INFO: Container etcd ready: true, restart count 0 Mar 24 13:48:58.905: INFO: kindnet-zn8sx started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.905: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 13:48:58.905: INFO: kube-proxy-46nsr started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses 
recorded) Mar 24 13:48:58.905: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 13:48:58.905: INFO: local-path-provisioner-d4947b89c-72frh started at 2020-03-15 18:25:04 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.905: INFO: Container local-path-provisioner ready: true, restart count 0 W0324 13:48:58.908042 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 13:48:58.977: INFO: Latency metrics for node iruya-control-plane Mar 24 13:48:58.977: INFO: Logging node info for node iruya-worker Mar 24 13:48:58.981: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker,UID:94e58020-6063-4274-b0bd-d7c4f772701c,ResourceVersion:1602344,Generation:0,CreationTimestamp:2020-03-15 18:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-03-24 13:48:38 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-24 13:48:38 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-24 13:48:38 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-24 13:48:38 +0000 UTC 2020-03-15 18:25:15 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.6} {Hostname iruya-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5332b21b7d0c4f35b2434f4fc8bea1cf,SystemUUID:92e1ff09-3c3c-490b-b499-0de27dc489ae,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} 
{[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} 
{[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Mar 24 13:48:58.981: INFO: Logging kubelet events for node iruya-worker Mar 24 13:48:58.983: INFO: Logging pods the kubelet thinks is on node iruya-worker Mar 24 13:48:58.989: INFO: kindnet-gwz5g started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.989: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 13:48:58.989: INFO: test-pod started at 2020-03-24 13:43:41 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.989: INFO: Container nginx ready: true, restart 
count 0 Mar 24 13:48:58.989: INFO: kube-proxy-pmz4p started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:58.989: INFO: Container kube-proxy ready: true, restart count 0 W0324 13:48:58.993236 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 13:48:59.049: INFO: Latency metrics for node iruya-worker Mar 24 13:48:59.049: INFO: Logging node info for node iruya-worker2 Mar 24 13:48:59.052: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker2,UID:67dfdf76-d64a-45cb-a2a9-755b73c85644,ResourceVersion:1602304,Generation:0,CreationTimestamp:2020-03-15 18:24:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker2,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-03-24 13:48:17 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-24 13:48:17 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-24 13:48:17 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-24 13:48:17 +0000 UTC 2020-03-15 18:24:52 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.5} {Hostname iruya-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5fda03f0d02548b7a74f8a4b6cc8795b,SystemUUID:d8b7a3a5-76b4-4c0b-85d7-cdb97f2c8b1a,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 
36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Mar 24 13:48:59.053: INFO: Logging kubelet events for node iruya-worker2 Mar 24 13:48:59.056: INFO: Logging pods the kubelet thinks is on node iruya-worker2 Mar 24 13:48:59.063: INFO: coredns-5d4dd4b4db-gm7vr started at 2020-03-15 18:24:52 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:59.063: INFO: Container coredns ready: true, restart count 0 Mar 24 13:48:59.063: INFO: coredns-5d4dd4b4db-6jcgz started at 2020-03-15 18:24:54 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:59.063: INFO: Container coredns ready: true, restart count 0 Mar 24 13:48:59.063: INFO: kube-proxy-vwbcj started at 2020-03-15 18:24:42 +0000 UTC (0+1 container statuses recorded) Mar 24 13:48:59.063: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 13:48:59.063: INFO: kindnet-mgd8b started at 
2020-03-15 18:24:43 +0000 UTC (0+1 container statuses recorded)
Mar 24 13:48:59.063: INFO: Container kindnet-cni ready: true, restart count 0
W0324 13:48:59.066455 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 24 13:48:59.120: INFO: Latency metrics for node iruya-worker2
Mar 24 13:48:59.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9561" for this suite.
Mar 24 13:49:21.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:49:21.214: INFO: namespace statefulset-9561 deletion completed in 22.090432955s

• Failure [339.598 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

Mar 24 13:48:45.745: Pod ss-0 expected to be re-created at least once
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:49:21.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-4b9b
STEP: Creating a pod to test atomic-volume-subpath
Mar 24 13:49:21.291: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4b9b" in namespace "subpath-4186" to be "success or failure"
Mar 24 13:49:21.311: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066455ms
Mar 24 13:49:23.315: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024486356s
Mar 24 13:49:25.319: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.028073055s
Mar 24 13:49:27.322: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.03140779s
Mar 24 13:49:29.326: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.035447299s
Mar 24 13:49:31.331: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.039896267s
Mar 24 13:49:33.335: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.043685461s
Mar 24 13:49:35.339: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.048032835s
Mar 24 13:49:37.343: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 16.052303218s
Mar 24 13:49:39.348: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 18.057221076s
Mar 24 13:49:41.353: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.061993699s
Mar 24 13:49:43.357: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 22.066527747s
Mar 24 13:49:45.361: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070224911s
STEP: Saw pod success
Mar 24 13:49:45.361: INFO: Pod "pod-subpath-test-downwardapi-4b9b" satisfied condition "success or failure"
Mar 24 13:49:45.363: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-4b9b container test-container-subpath-downwardapi-4b9b:
STEP: delete the pod
Mar 24 13:49:45.396: INFO: Waiting for pod pod-subpath-test-downwardapi-4b9b to disappear
Mar 24 13:49:45.439: INFO: Pod pod-subpath-test-downwardapi-4b9b no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-4b9b
Mar 24 13:49:45.439: INFO: Deleting pod "pod-subpath-test-downwardapi-4b9b" in namespace "subpath-4186"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:49:45.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4186" for this suite.
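The polling loop above drives the pod from Pending through Running to Succeeded before the framework declares "pod success". A minimal standalone sketch of that phase check (Python; the sample lines are abridged copies of the log entries above, and `phases` is a hypothetical helper written for illustration, not part of the e2e framework):

```python
import re

# Poll lines copied from the e2e log above (abridged to three samples).
log_lines = [
    'Mar 24 13:49:21.311: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066455ms',
    'Mar 24 13:49:25.319: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.028073055s',
    'Mar 24 13:49:45.361: INFO: Pod "pod-subpath-test-downwardapi-4b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070224911s',
]

def phases(lines):
    """Return the Phase value of each poll line, in log order."""
    out = []
    for line in lines:
        m = re.search(r'Phase="(\w+)"', line)
        if m:
            out.append(m.group(1))
    return out

seen = phases(log_lines)
# The pod must finish Succeeded and never report Failed.
assert seen[-1] == "Succeeded" and "Failed" not in seen
```

The same pattern works for scanning a full suite log when triaging which pods never left Pending.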
Mar 24 13:49:51.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:49:51.531: INFO: namespace subpath-4186 deletion completed in 6.085761647s

• [SLOW TEST:30.317 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:49:51.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d in namespace container-probe-776
Mar 24 13:49:55.621: INFO: Started pod liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d in namespace container-probe-776
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 13:49:55.625: INFO: Initial restart count of pod liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is 0
Mar 24 13:50:15.670: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 1 (20.045335024s elapsed)
Mar 24 13:50:35.712: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 2 (40.086555749s elapsed)
Mar 24 13:50:55.755: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 3 (1m0.129659752s elapsed)
Mar 24 13:51:15.820: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 4 (1m20.194406341s elapsed)
Mar 24 13:52:23.994: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 5 (2m28.368750684s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:52:24.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-776" for this suite.
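The probe test above passes only if each observed restartCount strictly exceeds the previous one. A minimal standalone sketch of that monotonicity check (Python; the sample lines are abridged copies of the log entries above, and `restart_counts` is a hypothetical helper written for illustration, not framework code):

```python
import re

# "Restart count ... is now N" lines copied from the probe test log above (abridged).
log_lines = [
    "Mar 24 13:50:15.670: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 1 (20.045335024s elapsed)",
    "Mar 24 13:50:35.712: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 2 (40.086555749s elapsed)",
    "Mar 24 13:52:23.994: INFO: Restart count of pod container-probe-776/liveness-0c198a82-2a0f-4fd0-b089-0f5052aaef6d is now 5 (2m28.368750684s elapsed)",
]

def restart_counts(lines):
    """Pull the integer restart count out of each matching log line."""
    return [int(m.group(1)) for m in
            (re.search(r"is now (\d+)", line) for line in lines) if m]

counts = restart_counts(log_lines)
# "Monotonically increasing": every observation strictly exceeds its predecessor.
assert all(a < b for a, b in zip(counts, counts[1:]))
```

Counts may skip values (here 2 jumps to 5) because the kubelet is only sampled every few seconds; the invariant is strict increase, not increase by one.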
Mar 24 13:52:30.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:52:30.102: INFO: namespace container-probe-776 deletion completed in 6.089457763s

• [SLOW TEST:158.570 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:52:30.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bad6592f-769c-4580-bba7-1a2ef84fa6ff
STEP: Creating a pod to test consume configMaps
Mar 24 13:52:30.188: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5" in namespace "projected-6274" to be "success or failure"
Mar 24 13:52:30.191: INFO: Pod "pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.881727ms
Mar 24 13:52:32.206: INFO: Pod "pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018789825s
Mar 24 13:52:34.210: INFO: Pod "pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022601768s
STEP: Saw pod success
Mar 24 13:52:34.210: INFO: Pod "pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5" satisfied condition "success or failure"
Mar 24 13:52:34.213: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5 container projected-configmap-volume-test:
STEP: delete the pod
Mar 24 13:52:34.233: INFO: Waiting for pod pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5 to disappear
Mar 24 13:52:34.237: INFO: Pod pod-projected-configmaps-9f9ccf4a-d8c8-4adf-bd48-1d07957a8ea5 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:52:34.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6274" for this suite.
Mar 24 13:52:40.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:52:40.352: INFO: namespace projected-6274 deletion completed in 6.111749246s

• [SLOW TEST:10.249 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:52:40.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-428
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 24 13:52:40.407: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 24 13:53:06.639: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostName&protocol=udp&host=10.244.2.220&port=8081&tries=1'] Namespace:pod-network-test-428 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 24 13:53:06.639: INFO: >>> kubeConfig: /root/.kube/config
I0324 13:53:06.675899 6 log.go:172] (0xc002436580) (0xc001dea640) Create stream
I0324 13:53:06.675935 6 log.go:172] (0xc002436580) (0xc001dea640) Stream added, broadcasting: 1
I0324 13:53:06.678317 6 log.go:172] (0xc002436580) Reply frame received for 1
I0324 13:53:06.678371 6 log.go:172] (0xc002436580) (0xc002d4b4a0) Create stream
I0324 13:53:06.678387 6 log.go:172] (0xc002436580) (0xc002d4b4a0) Stream added, broadcasting: 3
I0324 13:53:06.679450 6 log.go:172] (0xc002436580) Reply frame received for 3
I0324 13:53:06.679492 6 log.go:172] (0xc002436580) (0xc002d4b540) Create stream
I0324 13:53:06.679507 6 log.go:172] (0xc002436580) (0xc002d4b540) Stream added, broadcasting: 5
I0324 13:53:06.680562 6 log.go:172] (0xc002436580) Reply frame received for 5
I0324 13:53:06.761787 6 log.go:172] (0xc002436580) Data frame received for 3
I0324 13:53:06.761814 6 log.go:172] (0xc002d4b4a0) (3) Data frame handling
I0324 13:53:06.761831 6 log.go:172] (0xc002d4b4a0) (3) Data frame sent
I0324 13:53:06.762687 6 log.go:172] (0xc002436580) Data frame received for 3
I0324 13:53:06.762723 6 log.go:172] (0xc002d4b4a0) (3) Data frame handling
I0324 13:53:06.762748 6 log.go:172] (0xc002436580) Data frame received for 5
I0324 13:53:06.762764 6 log.go:172] (0xc002d4b540) (5) Data frame handling
I0324 13:53:06.768920 6 log.go:172] (0xc002436580) Data frame received for 1
I0324 13:53:06.768952 6 log.go:172] (0xc001dea640) (1) Data frame handling
I0324 13:53:06.768987 6 log.go:172] (0xc001dea640) (1) Data frame sent
I0324 13:53:06.769009 6 log.go:172] (0xc002436580) (0xc001dea640) Stream removed, broadcasting: 1
I0324 13:53:06.769096 6 log.go:172] (0xc002436580) (0xc001dea640) Stream removed, broadcasting: 1
I0324 13:53:06.769230 6 log.go:172] (0xc002436580) (0xc002d4b4a0) Stream removed, broadcasting: 3
I0324 13:53:06.769242 6 log.go:172] (0xc002436580) (0xc002d4b540) Stream removed, broadcasting: 5
I0324 13:53:06.769266 6 log.go:172] (0xc002436580) Go away received
Mar 24 13:53:06.769: INFO: Waiting for endpoints: map[]
Mar 24 13:53:06.772: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostName&protocol=udp&host=10.244.1.83&port=8081&tries=1'] Namespace:pod-network-test-428 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 24 13:53:06.772: INFO: >>> kubeConfig: /root/.kube/config
I0324 13:53:06.799984 6 log.go:172] (0xc001554bb0) (0xc0030c1360) Create stream
I0324 13:53:06.800025 6 log.go:172] (0xc001554bb0) (0xc0030c1360) Stream added, broadcasting: 1
I0324 13:53:06.802387 6 log.go:172] (0xc001554bb0) Reply frame received for 1
I0324 13:53:06.802429 6 log.go:172] (0xc001554bb0) (0xc00149ec80) Create stream
I0324 13:53:06.802443 6 log.go:172] (0xc001554bb0) (0xc00149ec80) Stream added, broadcasting: 3
I0324 13:53:06.803573 6 log.go:172] (0xc001554bb0) Reply frame received for 3
I0324 13:53:06.803623 6 log.go:172] (0xc001554bb0) (0xc0030c14a0) Create stream
I0324 13:53:06.803639 6 log.go:172] (0xc001554bb0) (0xc0030c14a0) Stream added, broadcasting: 5
I0324 13:53:06.804661 6 log.go:172] (0xc001554bb0) Reply frame received for 5
I0324 13:53:06.876624 6 log.go:172] (0xc001554bb0) Data frame received for 3
I0324 13:53:06.876675 6 log.go:172] (0xc00149ec80) (3) Data frame handling
I0324 13:53:06.876708 6 log.go:172] (0xc00149ec80) (3) Data frame sent
I0324 13:53:06.877093 6 log.go:172] (0xc001554bb0) Data frame received for 5
I0324 13:53:06.877309 6 log.go:172] (0xc0030c14a0) (5) Data frame handling
I0324 13:53:06.877477 6 log.go:172] (0xc001554bb0) Data frame received for 3
I0324 13:53:06.877514 6 log.go:172] (0xc00149ec80) (3) Data frame handling
I0324 13:53:06.879626 6 log.go:172] (0xc001554bb0) Data frame received for 1
I0324 13:53:06.879654 6 log.go:172] (0xc0030c1360) (1) Data frame handling
I0324 13:53:06.879669 6
log.go:172] (0xc0030c1360) (1) Data frame sent I0324 13:53:06.879702 6 log.go:172] (0xc001554bb0) (0xc0030c1360) Stream removed, broadcasting: 1 I0324 13:53:06.879720 6 log.go:172] (0xc001554bb0) Go away received I0324 13:53:06.879967 6 log.go:172] (0xc001554bb0) (0xc0030c1360) Stream removed, broadcasting: 1 I0324 13:53:06.880031 6 log.go:172] (0xc001554bb0) (0xc00149ec80) Stream removed, broadcasting: 3 I0324 13:53:06.880055 6 log.go:172] (0xc001554bb0) (0xc0030c14a0) Stream removed, broadcasting: 5 Mar 24 13:53:06.880: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:53:06.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-428" for this suite. Mar 24 13:53:28.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:53:28.971: INFO: namespace pod-network-test-428 deletion completed in 22.087086116s • [SLOW TEST:48.619 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:53:28.972: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-218c0f63-b43f-461f-9d48-5bf1fd02f721 STEP: Creating a pod to test consume configMaps Mar 24 13:53:29.058: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226" in namespace "projected-8609" to be "success or failure" Mar 24 13:53:29.071: INFO: Pod "pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226": Phase="Pending", Reason="", readiness=false. Elapsed: 13.104204ms Mar 24 13:53:31.074: INFO: Pod "pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016523778s Mar 24 13:53:33.077: INFO: Pod "pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019584195s STEP: Saw pod success Mar 24 13:53:33.077: INFO: Pod "pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226" satisfied condition "success or failure" Mar 24 13:53:33.080: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226 container projected-configmap-volume-test: STEP: delete the pod Mar 24 13:53:33.110: INFO: Waiting for pod pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226 to disappear Mar 24 13:53:33.147: INFO: Pod pod-projected-configmaps-bf24dbe8-61a9-4483-84ab-2a88071ee226 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:53:33.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8609" for this suite. Mar 24 13:53:39.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 13:53:39.246: INFO: namespace projected-8609 deletion completed in 6.095600644s • [SLOW TEST:10.275 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 13:53:39.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-f430f110-5c21-42b0-af11-2b458441c803 STEP: Creating a pod to test consume secrets Mar 24 13:53:39.324: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393" in namespace "projected-3234" to be "success or failure" Mar 24 13:53:39.343: INFO: Pod "pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393": Phase="Pending", Reason="", readiness=false. Elapsed: 18.405518ms Mar 24 13:53:41.347: INFO: Pod "pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022596235s Mar 24 13:53:43.351: INFO: Pod "pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026970502s STEP: Saw pod success Mar 24 13:53:43.351: INFO: Pod "pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393" satisfied condition "success or failure" Mar 24 13:53:43.355: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393 container projected-secret-volume-test: STEP: delete the pod Mar 24 13:53:43.385: INFO: Waiting for pod pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393 to disappear Mar 24 13:53:43.402: INFO: Pod pod-projected-secrets-8fb92f0c-2e97-4091-9835-8a5ec3803393 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 13:53:43.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3234" for this suite. 
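The repeated "Waiting up to 5m0s for pod … to be "success or failure"" lines above come from a polling loop in the e2e framework that reports the pod phase and elapsed time on every poll. A minimal sketch of that pattern in Python (the helper name and the stubbed phase getter are mine; the real framework polls the API server's pod.status.phase):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout expires.

    Mirrors the log pattern: each poll reports the phase and elapsed time.
    `get_phase` stands in for a real API call reading pod.status.phase.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f'pod did not finish within {timeout_s}s')
        sleep(poll_s)

# Stubbed phase sequence, standing in for the API server's answers
# (matches the Pending -> Pending -> Succeeded progression in the log).
phases = iter(['Pending', 'Pending', 'Succeeded'])
result = wait_for_pod_phase(lambda: next(phases), poll_s=0.0)
```

The real framework's timeout (5m0s here) and 2s poll interval are visible directly in the Elapsed values above.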
Mar 24 13:53:49.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:53:49.487: INFO: namespace projected-3234 deletion completed in 6.080812515s
• [SLOW TEST:10.241 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:53:49.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8c054ce2-6d87-4161-83d6-386dc43c6ebf
STEP: Creating a pod to test consume secrets
Mar 24 13:53:49.753: INFO: Waiting up to 5m0s for pod "pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184" in namespace "secrets-2171" to be "success or failure"
Mar 24 13:53:49.762: INFO: Pod "pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566964ms
Mar 24 13:53:51.766: INFO: Pod "pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01263954s
Mar 24 13:53:53.770: INFO: Pod "pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016921446s
STEP: Saw pod success
Mar 24 13:53:53.770: INFO: Pod "pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184" satisfied condition "success or failure"
Mar 24 13:53:53.773: INFO: Trying to get logs from node iruya-worker pod pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184 container secret-env-test:
STEP: delete the pod
Mar 24 13:53:53.807: INFO: Waiting for pod pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184 to disappear
Mar 24 13:53:53.816: INFO: Pod pod-secrets-7632a536-75be-4ad4-99d1-c3361ae2e184 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:53:53.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2171" for this suite.
Mar 24 13:53:59.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:53:59.913: INFO: namespace secrets-2171 deletion completed in 6.093938211s
• [SLOW TEST:10.426 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:53:59.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:54:26.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2313" for this suite.
Mar 24 13:54:32.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:54:32.240: INFO: namespace namespaces-2313 deletion completed in 6.101319119s
STEP: Destroying namespace "nsdeletetest-42" for this suite.
Mar 24 13:54:32.242: INFO: Namespace nsdeletetest-42 was already deleted
STEP: Destroying namespace "nsdeletetest-6695" for this suite.
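The intra-pod UDP check earlier in this log works by curling the test container's `/dial` endpoint (served on port 8080 by the netexec test image), which fans the request out over UDP to the target pod and reports which hostnames answered. A sketch of how that probe URL is assembled, with the IPs taken from the log and the helper name being mine:

```python
from urllib.parse import urlencode

def dial_url(tester_ip, target_ip, protocol='udp', port=8081, tries=1):
    """Build the netexec /dial probe URL seen in the ExecWithOptions log lines.

    The tester pod's HTTP server listens on 8080; /dial sends `tries`
    requests of the given protocol to host:port and returns the responders.
    """
    query = urlencode({'request': 'hostName', 'protocol': protocol,
                       'host': target_ip, 'port': port, 'tries': tries})
    return f'http://{tester_ip}:8080/dial?{query}'

# Values from the first probe in the log.
url = dial_url('10.244.2.221', '10.244.2.220')
```

In the suite this URL is passed to `curl -g -q -s` inside the `hostexec` container; the test then checks that the returned hostname set matches the expected endpoints.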
Mar 24 13:54:38.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:54:38.364: INFO: namespace nsdeletetest-6695 deletion completed in 6.121974582s
• [SLOW TEST:38.451 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:54:38.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 24 13:54:38.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603374,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 24 13:54:38.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603374,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 24 13:54:48.438: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603395,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 24 13:54:48.438: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603395,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 24 13:54:58.447: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603415,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 24 13:54:58.447: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603415,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 24 13:55:08.454: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603435,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 24 13:55:08.454: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-a,UID:15dfa5c7-7fb5-494c-b8bd-e238d7d2b77a,ResourceVersion:1603435,Generation:0,CreationTimestamp:2020-03-24 13:54:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 24 13:55:18.460: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-b,UID:405469c2-f149-4464-9cae-2f72f42fcca6,ResourceVersion:1603456,Generation:0,CreationTimestamp:2020-03-24 13:55:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 24 13:55:18.460: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-b,UID:405469c2-f149-4464-9cae-2f72f42fcca6,ResourceVersion:1603456,Generation:0,CreationTimestamp:2020-03-24 13:55:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 24 13:55:28.467: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-b,UID:405469c2-f149-4464-9cae-2f72f42fcca6,ResourceVersion:1603476,Generation:0,CreationTimestamp:2020-03-24 13:55:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 24 13:55:28.467: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9075,SelfLink:/api/v1/namespaces/watch-9075/configmaps/e2e-watch-test-configmap-b,UID:405469c2-f149-4464-9cae-2f72f42fcca6,ResourceVersion:1603476,Generation:0,CreationTimestamp:2020-03-24 13:55:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:55:38.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9075" for this suite.
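The watcher test above registers three watches (label A, label B, and A-or-B) and asserts that each sees exactly the right ADDED/MODIFIED/DELETED notifications; every event appears twice in the log because two of the three selectors match it. The dispatch logic can be sketched as follows, with label-selector semantics simplified to an exact match on the `watch-this-configmap` value:

```python
def dispatch(event, watchers):
    """Deliver a watch event to every watcher whose selector accepts it.

    event: (event_type, labels) tuple.
    watchers: {name: (selector, received)} where selector is the set of
    accepted values for the 'watch-this-configmap' label.
    """
    label = event[1].get('watch-this-configmap')
    for selector, received in watchers.values():
        if label in selector:
            received.append(event[0])

watchers = {
    'A':      ({'multiple-watchers-A'}, []),
    'B':      ({'multiple-watchers-B'}, []),
    'A-or-B': ({'multiple-watchers-A', 'multiple-watchers-B'}, []),
}
# The event sequence from the test: create/modify/modify/delete A, then
# create/delete B.
for ev in [('ADDED',    {'watch-this-configmap': 'multiple-watchers-A'}),
           ('MODIFIED', {'watch-this-configmap': 'multiple-watchers-A'}),
           ('MODIFIED', {'watch-this-configmap': 'multiple-watchers-A'}),
           ('DELETED',  {'watch-this-configmap': 'multiple-watchers-A'}),
           ('ADDED',    {'watch-this-configmap': 'multiple-watchers-B'}),
           ('DELETED',  {'watch-this-configmap': 'multiple-watchers-B'})]:
    dispatch(ev, watchers)
```

Watcher A and the A-or-B watcher both receive every configmap-A event, which is exactly why each "Got : …" line above is printed twice with identical ResourceVersions.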
Mar 24 13:55:44.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:55:44.607: INFO: namespace watch-9075 deletion completed in 6.135437042s
• [SLOW TEST:66.242 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:55:44.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 24 13:55:49.184: INFO: Successfully updated pod "annotationupdateca4a2f56-1a7d-436a-b044-7ec81c09c44b"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:55:51.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4023" for this suite.
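Summary lines like "deletion completed in 6.135437042s" and "[SLOW TEST:66.242 seconds]" carry the per-spec timing data in a fixed textual shape, so extracting them from a saved run is a small regex job. A sketch (the function name is mine; in practice you would feed it the whole captured log file):

```python
import re

SLOW = re.compile(r'\[SLOW TEST:([0-9.]+) seconds\]')
DELETED = re.compile(r'namespace (\S+) deletion completed in ([0-9.]+)s')

def timings(log_text):
    """Return ([slow-test durations], {namespace: deletion seconds})."""
    slow = [float(s) for s in SLOW.findall(log_text)]
    deletions = {ns: float(sec) for ns, sec in DELETED.findall(log_text)}
    return slow, deletions

# Sample lines copied from this log.
sample = ('Mar 24 13:55:44.607: INFO: namespace watch-9075 deletion completed in 6.135437042s '
          '• [SLOW TEST:66.242 seconds]')
slow, deletions = timings(sample)
```

Sorting the slow-test durations across a full run is a quick way to find which conformance specs dominate wall-clock time.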
Mar 24 13:56:13.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:56:13.330: INFO: namespace projected-4023 deletion completed in 22.111417044s
• [SLOW TEST:28.722 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:56:13.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:56:17.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3514" for this suite.
Mar 24 13:56:55.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:56:55.551: INFO: namespace kubelet-test-3514 deletion completed in 38.099050868s
• [SLOW TEST:42.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:56:55.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6336, will wait for the garbage collector to delete the pods
Mar 24 13:57:01.675: INFO: Deleting Job.batch foo took: 5.868687ms
Mar 24 13:57:01.975: INFO: Terminating Job.batch foo pods took: 300.318014ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:57:41.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6336" for this suite.
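"will wait for the garbage collector to delete the pods" above reflects cascading deletion: the Job is deleted with a non-Orphan propagation policy, and its pods (linked via ownerReferences) are then removed by the garbage collector while the test polls until they are gone. A toy in-memory sketch of that cascade (store layout and pod names are invented for illustration; real code goes through the API server's DeleteOptions):

```python
def delete_job(store, job_name, propagation='Foreground'):
    """Toy cascade: remove a job and, per policy, its dependent pods.

    store: {'jobs': set of names, 'pods': {pod_name: owning_job}}.
    'Orphan' leaves the pods in place; 'Background' and 'Foreground'
    both end with the pods gone (the real GC differs only in ordering,
    which this toy ignores).
    """
    store['jobs'].discard(job_name)
    if propagation != 'Orphan':
        store['pods'] = {p: owner for p, owner in store['pods'].items()
                         if owner != job_name}
    return store

# Hypothetical pod names; the log only shows the Job name "foo".
store = {'jobs': {'foo'}, 'pods': {'foo-8w2k4': 'foo', 'foo-x7zl9': 'foo'}}
delete_job(store, 'foo')
```

The two timing lines in the log ("Deleting Job.batch foo took …" and "Terminating Job.batch foo pods took …") correspond to the two phases of this cascade.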
Mar 24 13:57:47.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:57:47.971: INFO: namespace job-6336 deletion completed in 6.089824177s
• [SLOW TEST:52.420 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:57:47.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 13:57:48.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65" in namespace "projected-6614" to be "success or failure"
Mar 24 13:57:48.066: INFO: Pod "downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65": Phase="Pending", Reason="", readiness=false. Elapsed: 7.98327ms
Mar 24 13:57:50.091: INFO: Pod "downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033073977s
Mar 24 13:57:52.096: INFO: Pod "downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037802608s
STEP: Saw pod success
Mar 24 13:57:52.096: INFO: Pod "downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65" satisfied condition "success or failure"
Mar 24 13:57:52.100: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65 container client-container:
STEP: delete the pod
Mar 24 13:57:52.121: INFO: Waiting for pod downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65 to disappear
Mar 24 13:57:52.126: INFO: Pod downwardapi-volume-05a621ef-cf9f-41d3-a5f9-a93a675b6a65 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:57:52.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6614" for this suite.
Mar 24 13:57:58.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:57:58.259: INFO: namespace projected-6614 deletion completed in 6.129909628s
• [SLOW TEST:10.287 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:57:58.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 24 13:57:58.319: INFO: Waiting up to 5m0s for pod "pod-0a254538-22f4-472d-8dbc-f0a49c2349c4" in namespace "emptydir-8420" to be "success or failure"
Mar 24 13:57:58.324: INFO: Pod "pod-0a254538-22f4-472d-8dbc-f0a49c2349c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385222ms
Mar 24 13:58:00.372: INFO: Pod "pod-0a254538-22f4-472d-8dbc-f0a49c2349c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053056155s
Mar 24 13:58:02.376: INFO: Pod "pod-0a254538-22f4-472d-8dbc-f0a49c2349c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057039218s
STEP: Saw pod success
Mar 24 13:58:02.376: INFO: Pod "pod-0a254538-22f4-472d-8dbc-f0a49c2349c4" satisfied condition "success or failure"
Mar 24 13:58:02.379: INFO: Trying to get logs from node iruya-worker pod pod-0a254538-22f4-472d-8dbc-f0a49c2349c4 container test-container:
STEP: delete the pod
Mar 24 13:58:02.397: INFO: Waiting for pod pod-0a254538-22f4-472d-8dbc-f0a49c2349c4 to disappear
Mar 24 13:58:02.414: INFO: Pod pod-0a254538-22f4-472d-8dbc-f0a49c2349c4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:58:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8420" for this suite.
Mar 24 13:58:08.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:58:08.527: INFO: namespace emptydir-8420 deletion completed in 6.109343036s
• [SLOW TEST:10.268 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:58:08.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Mar 24 13:58:08.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 24 13:58:08.604: INFO: Waiting for terminating namespaces to be deleted...
Mar 24 13:58:08.606: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Mar 24 13:58:08.611: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Mar 24 13:58:08.611: INFO: Container kube-proxy ready: true, restart count 0
Mar 24 13:58:08.611: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Mar 24 13:58:08.611: INFO: Container kindnet-cni ready: true, restart count 0
Mar 24 13:58:08.611: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Mar 24 13:58:08.616: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Mar 24 13:58:08.617: INFO: Container kube-proxy ready: true, restart count 0
Mar 24 13:58:08.617: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Mar 24 13:58:08.617: INFO: Container kindnet-cni ready: true, restart count 0
Mar 24 13:58:08.617: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Mar 24 13:58:08.617: INFO: Container coredns ready: true, restart count 0
Mar 24 13:58:08.617: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Mar 24 13:58:08.617: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-52c949fe-6eef-4158-8c8e-95b7652288cf 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-52c949fe-6eef-4158-8c8e-95b7652288cf off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-52c949fe-6eef-4158-8c8e-95b7652288cf
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:58:16.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5659" for this suite.
Mar 24 13:58:26.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:58:26.873: INFO: namespace sched-pred-5659 deletion completed in 10.088727844s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:18.346 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:58:26.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 13:58:26.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124" in namespace "downward-api-2761" to be "success or failure"
Mar 24 13:58:26.965: INFO: Pod "downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124": Phase="Pending", Reason="", readiness=false. Elapsed: 9.971944ms
Mar 24 13:58:28.977: INFO: Pod "downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022064536s
Mar 24 13:58:30.981: INFO: Pod "downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025455522s
STEP: Saw pod success
Mar 24 13:58:30.981: INFO: Pod "downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124" satisfied condition "success or failure"
Mar 24 13:58:30.983: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124 container client-container:
STEP: delete the pod
Mar 24 13:58:31.002: INFO: Waiting for pod downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124 to disappear
Mar 24 13:58:31.013: INFO: Pod downwardapi-volume-0cf683de-2ee1-450a-9007-d79389393124 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:58:31.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2761" for this suite.
Mar 24 13:58:37.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:58:37.118: INFO: namespace downward-api-2761 deletion completed in 6.101404444s
• [SLOW TEST:10.244 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:58:37.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 13:58:41.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2677" for this suite.
Mar 24 13:59:19.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 13:59:19.339: INFO: namespace kubelet-test-2677 deletion completed in 38.101038878s
• [SLOW TEST:42.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 13:59:19.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-40bb1744-09ba-42db-a37f-4860e35c2e85
STEP: Creating secret with name s-test-opt-upd-64332743-4c51-4621-9684-9d2dce44bc8f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-40bb1744-09ba-42db-a37f-4860e35c2e85
STEP: Updating secret s-test-opt-upd-64332743-4c51-4621-9684-9d2dce44bc8f
STEP: Creating secret with name s-test-opt-create-5b6441b6-3351-4a86-ae1c-65432d1958f9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:00:33.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1389" for this suite.
Mar 24 14:00:55.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:00:55.947: INFO: namespace secrets-1389 deletion completed in 22.109686526s
• [SLOW TEST:96.608 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:00:55.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 24 14:00:56.006: INFO: Waiting up to 5m0s for pod "downward-api-84c86c0f-baff-412e-915b-d24da54ec39b" in namespace "downward-api-7128" to be "success or failure"
Mar 24 14:00:56.010: INFO: Pod "downward-api-84c86c0f-baff-412e-915b-d24da54ec39b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.829086ms
Mar 24 14:00:58.040: INFO: Pod "downward-api-84c86c0f-baff-412e-915b-d24da54ec39b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03377519s
Mar 24 14:01:00.044: INFO: Pod "downward-api-84c86c0f-baff-412e-915b-d24da54ec39b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037632093s
STEP: Saw pod success
Mar 24 14:01:00.044: INFO: Pod "downward-api-84c86c0f-baff-412e-915b-d24da54ec39b" satisfied condition "success or failure"
Mar 24 14:01:00.047: INFO: Trying to get logs from node iruya-worker2 pod downward-api-84c86c0f-baff-412e-915b-d24da54ec39b container dapi-container:
STEP: delete the pod
Mar 24 14:01:00.083: INFO: Waiting for pod downward-api-84c86c0f-baff-412e-915b-d24da54ec39b to disappear
Mar 24 14:01:00.088: INFO: Pod downward-api-84c86c0f-baff-412e-915b-d24da54ec39b no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:01:00.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7128" for this suite.
Mar 24 14:01:06.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:01:06.184: INFO: namespace downward-api-7128 deletion completed in 6.092578889s
• [SLOW TEST:10.237 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:01:06.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4892
I0324 14:01:06.239537 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4892, replica count: 1
I0324 14:01:07.290094 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0324 14:01:08.290363 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0324 14:01:09.290569 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 24 14:01:09.434: INFO: Created: latency-svc-vqfxf
Mar 24 14:01:09.447: INFO: Got endpoints: latency-svc-vqfxf [57.235773ms]
Mar 24 14:01:09.483: INFO: Created: latency-svc-wr9px
Mar 24 14:01:09.491: INFO: Got endpoints: latency-svc-wr9px [42.846728ms]
Mar 24 14:01:09.506: INFO: Created: latency-svc-zzlhh
Mar 24 14:01:09.515: INFO: Got endpoints: latency-svc-zzlhh [66.928682ms]
Mar 24 14:01:09.567: INFO: Created: latency-svc-97s4f
Mar 24 14:01:09.570: INFO: Got endpoints: latency-svc-97s4f [121.916016ms]
Mar 24 14:01:09.600: INFO: Created: latency-svc-2wg26
Mar 24 14:01:09.611: INFO: Got endpoints: latency-svc-2wg26 [162.961269ms]
Mar 24 14:01:09.631: INFO: Created: latency-svc-4nndr
Mar 24 14:01:09.640: INFO: Got endpoints: latency-svc-4nndr [191.70545ms]
Mar 24 14:01:09.662: INFO: Created: latency-svc-jz72q
Mar 24 14:01:09.723: INFO: Got endpoints: latency-svc-jz72q [273.469673ms]
Mar 24 14:01:09.730: INFO: Created: latency-svc-kb4hz
Mar 24 14:01:09.736: INFO: Got endpoints: latency-svc-kb4hz [286.987569ms]
Mar 24 14:01:09.771: INFO: Created: latency-svc-zrqsv
Mar 24 14:01:09.780: INFO: Got endpoints: latency-svc-zrqsv [331.193659ms]
Mar 24 14:01:09.798: INFO: Created: latency-svc-5jb7c
Mar 24 14:01:09.822: INFO: Got endpoints: latency-svc-5jb7c [372.426425ms]
Mar 24 14:01:09.887: INFO: Created: latency-svc-twbf8
Mar 24 14:01:09.895: INFO: Got endpoints: latency-svc-twbf8 [445.290905ms]
Mar 24 14:01:09.914: INFO: Created: latency-svc-8wwg4
Mar 24 14:01:09.932: INFO: Got endpoints: latency-svc-8wwg4 [482.244953ms]
Mar 24 14:01:09.964: INFO: Created: latency-svc-bpqpz
Mar 24 14:01:09.974: INFO: Got endpoints: latency-svc-bpqpz [524.13756ms]
Mar 24 14:01:10.042: INFO: Created: latency-svc-l4f8s
Mar 24 14:01:10.045: INFO: Got endpoints: latency-svc-l4f8s [595.236674ms]
Mar 24 14:01:10.093: INFO: Created: latency-svc-mf8mh
Mar 24 14:01:10.106: INFO: Got endpoints: latency-svc-mf8mh [655.247677ms]
Mar 24 14:01:10.131: INFO: Created: latency-svc-ctc9k
Mar 24 14:01:10.166: INFO: Got endpoints: latency-svc-ctc9k [715.106365ms]
Mar 24 14:01:10.179: INFO: Created: latency-svc-hdgt4
Mar 24 14:01:10.191: INFO: Got endpoints: latency-svc-hdgt4 [700.114676ms]
Mar 24 14:01:10.208: INFO: Created: latency-svc-lqh2p
Mar 24 14:01:10.220: INFO: Got endpoints: latency-svc-lqh2p [705.442451ms]
Mar 24 14:01:10.242: INFO: Created: latency-svc-xfc2k
Mar 24 14:01:10.257: INFO: Got endpoints: latency-svc-xfc2k [686.550868ms]
Mar 24 14:01:10.304: INFO: Created: latency-svc-w297t
Mar 24 14:01:10.307: INFO: Got endpoints: latency-svc-w297t [695.87864ms]
Mar 24 14:01:10.340: INFO: Created: latency-svc-pvrjg
Mar 24 14:01:10.364: INFO: Got endpoints: latency-svc-pvrjg [723.353709ms]
Mar 24 14:01:10.388: INFO: Created: latency-svc-6wnmk
Mar 24 14:01:10.400: INFO: Got endpoints: latency-svc-6wnmk [677.709119ms]
Mar 24 14:01:10.454: INFO: Created: latency-svc-cfzc4
Mar 24 14:01:10.470: INFO: Got endpoints: latency-svc-cfzc4 [734.07609ms]
Mar 24 14:01:10.500: INFO: Created: latency-svc-tgwzb
Mar 24 14:01:10.509: INFO: Got endpoints: latency-svc-tgwzb [728.837669ms]
Mar 24 14:01:10.532: INFO: Created: latency-svc-cl5nq
Mar 24 14:01:10.545: INFO: Got endpoints: latency-svc-cl5nq [723.119707ms]
Mar 24 14:01:10.615: INFO: Created: latency-svc-gclgh
Mar 24 14:01:10.619: INFO: Got endpoints: latency-svc-gclgh [723.959267ms]
Mar 24 14:01:10.656: INFO: Created: latency-svc-7xx5w
Mar 24 14:01:10.672: INFO: Got endpoints: latency-svc-7xx5w [739.743238ms]
Mar 24 14:01:10.692: INFO: Created: latency-svc-lsnf8
Mar 24 14:01:10.702: INFO: Got endpoints: latency-svc-lsnf8 [727.641282ms]
Mar 24 14:01:10.754: INFO: Created: latency-svc-7fg2r
Mar 24 14:01:10.757: INFO: Got endpoints: latency-svc-7fg2r [712.037135ms]
Mar 24 14:01:10.802: INFO: Created: latency-svc-pqhgn
Mar 24 14:01:10.823: INFO: Got endpoints: latency-svc-pqhgn [717.097357ms]
Mar 24 14:01:10.903: INFO: Created: latency-svc-9dxml
Mar 24 14:01:10.914: INFO: Got endpoints: latency-svc-9dxml [748.419428ms]
Mar 24 14:01:10.932: INFO: Created: latency-svc-ftqv7
Mar 24 14:01:10.944: INFO: Got endpoints: latency-svc-ftqv7 [753.524887ms]
Mar 24 14:01:10.971: INFO: Created: latency-svc-jvzjw
Mar 24 14:01:10.986: INFO: Got endpoints: latency-svc-jvzjw [765.687277ms]
Mar 24 14:01:11.034: INFO: Created: latency-svc-m74tz
Mar 24 14:01:11.037: INFO: Got endpoints: latency-svc-m74tz [780.471138ms]
Mar 24 14:01:11.076: INFO: Created: latency-svc-7rwxt
Mar 24 14:01:11.089: INFO: Got endpoints: latency-svc-7rwxt [781.696026ms]
Mar 24 14:01:11.106: INFO: Created: latency-svc-q2pxw
Mar 24 14:01:11.119: INFO: Got endpoints: latency-svc-q2pxw [754.894845ms]
Mar 24 14:01:11.173: INFO: Created: latency-svc-gjwxm
Mar 24 14:01:11.186: INFO: Got endpoints: latency-svc-gjwxm [785.537452ms]
Mar 24 14:01:11.228: INFO: Created: latency-svc-cdwp7
Mar 24 14:01:11.233: INFO: Got endpoints: latency-svc-cdwp7 [762.925857ms]
Mar 24 14:01:11.259: INFO: Created: latency-svc-cq4mg
Mar 24 14:01:11.270: INFO: Got endpoints: latency-svc-cq4mg [760.907786ms]
Mar 24 14:01:11.334: INFO: Created: latency-svc-wf98c
Mar 24 14:01:11.376: INFO: Got endpoints: latency-svc-wf98c [830.150851ms]
Mar 24 14:01:11.402: INFO: Created: latency-svc-rqmml
Mar 24 14:01:11.429: INFO: Got endpoints: latency-svc-rqmml [809.898486ms]
Mar 24 14:01:11.451: INFO: Created: latency-svc-xgzw8
Mar 24 14:01:11.469: INFO: Got endpoints: latency-svc-xgzw8 [796.971841ms]
Mar 24 14:01:11.502: INFO: Created: latency-svc-zn6kv
Mar 24 14:01:11.517: INFO: Got endpoints: latency-svc-zn6kv [815.018055ms]
Mar 24 14:01:11.561: INFO: Created: latency-svc-g5wq8
Mar 24 14:01:11.565: INFO: Got endpoints: latency-svc-g5wq8 [806.937727ms]
Mar 24 14:01:11.592: INFO: Created: latency-svc-sjhz7
Mar 24 14:01:11.608: INFO: Got endpoints: latency-svc-sjhz7 [784.770959ms]
Mar 24 14:01:11.624: INFO: Created: latency-svc-t549h
Mar 24 14:01:11.639: INFO: Got endpoints: latency-svc-t549h [724.410169ms]
Mar 24 14:01:11.654: INFO: Created: latency-svc-jlqvl
Mar 24 14:01:11.693: INFO: Got endpoints: latency-svc-jlqvl [748.139999ms]
Mar 24 14:01:11.708: INFO: Created: latency-svc-vplls
Mar 24 14:01:11.725: INFO: Got endpoints: latency-svc-vplls [738.863088ms]
Mar 24 14:01:11.773: INFO: Created: latency-svc-gbs97
Mar 24 14:01:11.790: INFO: Got endpoints: latency-svc-gbs97 [752.771285ms]
Mar 24 14:01:11.830: INFO: Created: latency-svc-gcmzr
Mar 24 14:01:11.837: INFO: Got endpoints: latency-svc-gcmzr [747.442391ms]
Mar 24 14:01:11.858: INFO: Created: latency-svc-cndfk
Mar 24 14:01:11.872: INFO: Got endpoints: latency-svc-cndfk [753.246112ms]
Mar 24 14:01:11.895: INFO: Created: latency-svc-67q8f
Mar 24 14:01:11.915: INFO: Got endpoints: latency-svc-67q8f [728.585482ms]
Mar 24 14:01:11.968: INFO: Created: latency-svc-nntd4
Mar 24 14:01:11.975: INFO: Got endpoints: latency-svc-nntd4 [741.022339ms]
Mar 24 14:01:11.994: INFO: Created: latency-svc-6cg8v
Mar 24 14:01:12.005: INFO: Got endpoints: latency-svc-6cg8v [735.190662ms]
Mar 24 14:01:12.027: INFO: Created: latency-svc-zwb8c
Mar 24 14:01:12.056: INFO: Got endpoints: latency-svc-zwb8c [680.079419ms]
Mar 24 14:01:12.118: INFO: Created: latency-svc-xtz7z
Mar 24 14:01:12.144: INFO: Got endpoints: latency-svc-xtz7z [715.172037ms]
Mar 24 14:01:12.175: INFO: Created: latency-svc-66g94
Mar 24 14:01:12.186: INFO: Got endpoints: latency-svc-66g94 [717.050891ms]
Mar 24 14:01:12.206: INFO: Created: latency-svc-lzq2r
Mar 24 14:01:12.273: INFO: Got endpoints: latency-svc-lzq2r [756.339872ms]
Mar 24 14:01:12.278: INFO: Created: latency-svc-nkvjf
Mar 24 14:01:12.294: INFO: Got endpoints: latency-svc-nkvjf [729.777473ms]
Mar 24 14:01:12.314: INFO: Created: latency-svc-9tgqx
Mar 24 14:01:12.331: INFO: Got endpoints: latency-svc-9tgqx [723.23846ms]
Mar 24 14:01:12.348: INFO: Created: latency-svc-fkccq
Mar 24 14:01:12.361: INFO: Got endpoints: latency-svc-fkccq [722.584889ms]
Mar 24 14:01:12.435: INFO: Created: latency-svc-d5vjn
Mar 24 14:01:12.439: INFO: Got endpoints: latency-svc-d5vjn [745.963923ms]
Mar 24 14:01:12.464: INFO: Created: latency-svc-rx42z
Mar 24 14:01:12.488: INFO: Got endpoints: latency-svc-rx42z [762.545784ms]
Mar 24 14:01:12.518: INFO: Created: latency-svc-kz7w7
Mar 24 14:01:12.530: INFO: Got endpoints: latency-svc-kz7w7 [740.033293ms]
Mar 24 14:01:12.579: INFO: Created: latency-svc-fv7sd
Mar 24 14:01:12.612: INFO: Got endpoints: latency-svc-fv7sd [775.29928ms]
Mar 24 14:01:12.663: INFO: Created: latency-svc-xmctn
Mar 24 14:01:12.675: INFO: Got endpoints: latency-svc-xmctn [802.500364ms]
Mar 24 14:01:12.717: INFO: Created: latency-svc-7lt26
Mar 24 14:01:12.729: INFO: Got endpoints: latency-svc-7lt26 [813.927ms]
Mar 24 14:01:12.750: INFO: Created: latency-svc-f6dvn
Mar 24 14:01:12.771: INFO: Got endpoints: latency-svc-f6dvn [796.640687ms]
Mar 24 14:01:12.793: INFO: Created: latency-svc-kn5gh
Mar 24 14:01:12.830: INFO: Got endpoints: latency-svc-kn5gh [824.943344ms]
Mar 24 14:01:12.860: INFO: Created: latency-svc-b2vsv
Mar 24 14:01:12.898: INFO: Got endpoints: latency-svc-b2vsv [842.400554ms]
Mar 24 14:01:12.920: INFO: Created: latency-svc-2hd7l
Mar 24 14:01:12.974: INFO: Got endpoints: latency-svc-2hd7l [829.785085ms]
Mar 24 14:01:13.008: INFO: Created: latency-svc-x4hdk
Mar 24 14:01:13.030: INFO: Got endpoints: latency-svc-x4hdk [844.363491ms]
Mar 24 14:01:13.074: INFO: Created: latency-svc-skb2g
Mar 24 14:01:13.109: INFO: Got endpoints: latency-svc-skb2g [835.59818ms]
Mar 24 14:01:13.179: INFO: Created: latency-svc-fpgb4
Mar 24 14:01:13.199: INFO: Got endpoints: latency-svc-fpgb4 [904.605121ms]
Mar 24 14:01:13.242: INFO: Created: latency-svc-vctrn
Mar 24 14:01:13.245: INFO: Got endpoints: latency-svc-vctrn [914.007637ms]
Mar 24 14:01:13.297: INFO: Created: latency-svc-5x68x
Mar 24 14:01:13.302: INFO: Got endpoints: latency-svc-5x68x [940.778442ms]
Mar 24 14:01:13.332: INFO: Created: latency-svc-fmh7c
Mar 24 14:01:13.393: INFO: Got endpoints: latency-svc-fmh7c [954.298116ms]
Mar 24 14:01:13.396: INFO: Created: latency-svc-wgpjj
Mar 24 14:01:13.404: INFO: Got endpoints: latency-svc-wgpjj [916.418375ms]
Mar 24 14:01:13.424: INFO: Created: latency-svc-l5pzg
Mar 24 14:01:13.434: INFO: Got endpoints: latency-svc-l5pzg [904.179205ms]
Mar 24 14:01:13.452: INFO: Created: latency-svc-8xpxc
Mar 24 14:01:13.488: INFO: Got endpoints: latency-svc-8xpxc [875.522269ms]
Mar 24 14:01:13.532: INFO: Created: latency-svc-9q297
Mar 24 14:01:13.543: INFO: Got endpoints: latency-svc-9q297 [868.134967ms]
Mar 24 14:01:13.562: INFO: Created: latency-svc-8l8tg
Mar 24 14:01:13.573: INFO: Got endpoints: latency-svc-8l8tg [844.528178ms]
Mar 24 14:01:13.592: INFO: Created: latency-svc-nfwjc
Mar 24 14:01:13.604: INFO: Got endpoints: latency-svc-nfwjc [832.404551ms]
Mar 24 14:01:13.619: INFO: Created: latency-svc-bjg52
Mar 24 14:01:13.662: INFO: Got endpoints: latency-svc-bjg52 [832.016706ms]
Mar 24 14:01:13.668: INFO: Created: latency-svc-lrcbs
Mar 24 14:01:13.682: INFO: Got endpoints: latency-svc-lrcbs [783.834921ms]
Mar 24 14:01:13.704: INFO: Created: latency-svc-q2ctc
Mar 24 14:01:13.718: INFO: Got endpoints: latency-svc-q2ctc [744.320746ms]
Mar 24 14:01:13.736: INFO: Created: latency-svc-nzwvp
Mar 24 14:01:13.759: INFO: Got endpoints: latency-svc-nzwvp [729.073432ms]
Mar 24 14:01:13.819: INFO: Created: latency-svc-48srb
Mar 24 14:01:13.822: INFO: Got endpoints: latency-svc-48srb [712.759027ms]
Mar 24 14:01:13.860: INFO: Created: latency-svc-htt5k
Mar 24 14:01:13.875: INFO: Got endpoints: latency-svc-htt5k [676.133095ms]
Mar 24 14:01:13.896: INFO: Created: latency-svc-9ckbm
Mar 24 14:01:13.912: INFO: Got endpoints: latency-svc-9ckbm [666.791437ms]
Mar 24 14:01:13.950: INFO: Created: latency-svc-gln2b
Mar 24 14:01:13.954: INFO: Got endpoints: latency-svc-gln2b [652.088883ms]
Mar 24 14:01:13.976: INFO: Created: latency-svc-27jgr
Mar 24 14:01:13.984: INFO: Got endpoints: latency-svc-27jgr [590.838987ms]
Mar 24 14:01:14.007: INFO: Created: latency-svc-f2fpn
Mar 24 14:01:14.014: INFO: Got endpoints: latency-svc-f2fpn [609.668283ms]
Mar 24 14:01:14.034: INFO: Created: latency-svc-bth5x
Mar 24 14:01:14.045: INFO: Got endpoints: latency-svc-bth5x [610.501414ms]
Mar 24 14:01:14.106: INFO: Created: latency-svc-xckhx
Mar 24 14:01:14.138: INFO: Got endpoints: latency-svc-xckhx [649.83151ms]
Mar 24 14:01:14.168: INFO: Created: latency-svc-l4z8k
Mar 24 14:01:14.177: INFO: Got endpoints: latency-svc-l4z8k [634.455176ms]
Mar 24 14:01:14.198: INFO: Created: latency-svc-5ks9z
Mar 24 14:01:14.237: INFO: Got endpoints: latency-svc-5ks9z [663.614371ms]
Mar 24 14:01:14.256: INFO: Created: latency-svc-ml7rk
Mar 24 14:01:14.280: INFO: Got endpoints: latency-svc-ml7rk [676.356829ms]
Mar 24 14:01:14.298: INFO: Created: latency-svc-7vlpj
Mar 24 14:01:14.310: INFO: Got endpoints: latency-svc-7vlpj [647.777117ms]
Mar 24 14:01:14.328: INFO: Created: latency-svc-4s2s6
Mar 24 14:01:14.375: INFO: Got endpoints: latency-svc-4s2s6 [692.536472ms]
Mar 24 14:01:14.384: INFO: Created: latency-svc-6mkv5
Mar 24 14:01:14.395: INFO: Got endpoints: latency-svc-6mkv5 [676.272693ms]
Mar 24 14:01:14.414: INFO: Created: latency-svc-l4fbv
Mar 24 14:01:14.435: INFO: Got endpoints: latency-svc-l4fbv [675.673841ms]
Mar 24 14:01:14.459: INFO: Created: latency-svc-hzww9
Mar 24 14:01:14.474: INFO: Got endpoints: latency-svc-hzww9 [651.599573ms]
Mar 24 14:01:14.508: INFO: Created: latency-svc-6z5lp
Mar 24 14:01:14.522: INFO: Got endpoints: latency-svc-6z5lp [646.386349ms]
Mar 24 14:01:14.543: INFO: Created: latency-svc-jr76d
Mar 24 14:01:14.552: INFO: Got endpoints: latency-svc-jr76d [639.896229ms]
Mar 24 14:01:14.570: INFO: Created: latency-svc-7xvkt
Mar 24 14:01:14.582: INFO: Got endpoints: latency-svc-7xvkt [627.955398ms]
Mar 24 14:01:14.606: INFO: Created: latency-svc-p5mq7
Mar 24 14:01:14.668: INFO: Got endpoints: latency-svc-p5mq7 [684.244747ms]
Mar 24 14:01:14.671: INFO: Created: latency-svc-5lswz
Mar 24 14:01:14.679: INFO: Got endpoints: latency-svc-5lswz [664.636557ms]
Mar 24 14:01:14.706: INFO: Created: latency-svc-q5v2m
Mar 24 14:01:14.721: INFO: Got endpoints: latency-svc-q5v2m [676.178744ms]
Mar 24 14:01:14.738: INFO: Created: latency-svc-nl6vd
Mar 24 14:01:14.762: INFO: Got endpoints: latency-svc-nl6vd [624.552016ms]
Mar 24 14:01:14.825: INFO: Created: latency-svc-jz48d
Mar 24 14:01:14.827: INFO: Got endpoints: latency-svc-jz48d [649.473437ms]
Mar 24 14:01:14.856: INFO: Created: latency-svc-gwfgf
Mar 24 14:01:14.872: INFO: Got endpoints: latency-svc-gwfgf [634.961512ms]
Mar 24 14:01:14.910: INFO: Created: latency-svc-gcqcp
Mar 24 14:01:14.951: INFO: Got endpoints: latency-svc-gcqcp [670.363162ms]
Mar 24 14:01:14.972: INFO: Created: latency-svc-kw4w8
Mar 24 14:01:14.987: INFO: Got endpoints: latency-svc-kw4w8 [676.460966ms]
Mar 24 14:01:15.009: INFO: Created: latency-svc-4wgh5
Mar 24 14:01:15.017: INFO: Got endpoints: latency-svc-4wgh5 [642.285664ms]
Mar 24 14:01:15.100: INFO: Created: latency-svc-knsvp
Mar 24 14:01:15.102: INFO: Got endpoints: latency-svc-knsvp [707.401247ms]
Mar 24 14:01:15.164: INFO: Created: latency-svc-d4ckf
Mar 24 14:01:15.194: INFO: Got endpoints: latency-svc-d4ckf [758.900183ms]
Mar 24 14:01:15.250: INFO: Created: latency-svc-hndtn
Mar 24 14:01:15.253: INFO: Got endpoints: latency-svc-hndtn [779.500796ms]
Mar 24 14:01:15.276: INFO: Created: latency-svc-d979h
Mar 24 14:01:15.288: INFO: Got endpoints: latency-svc-d979h [765.951919ms]
Mar 24 14:01:15.306: INFO: Created: latency-svc-t5qh2
Mar 24 14:01:15.318: INFO: Got endpoints: latency-svc-t5qh2 [766.290103ms]
Mar 24 14:01:15.336: INFO: Created: latency-svc-774xv
Mar 24 14:01:15.348: INFO: Got endpoints: latency-svc-774xv [766.180436ms]
Mar 24 14:01:15.411: INFO: Created: latency-svc-zwt2d
Mar 24 14:01:15.414: INFO: Got endpoints: latency-svc-zwt2d [745.775762ms]
Mar 24 14:01:15.458: INFO: Created: latency-svc-92s77
Mar 24 14:01:15.469: INFO: Got endpoints: latency-svc-92s77 [790.321888ms]
Mar 24 14:01:15.486:
INFO: Created: latency-svc-x2842 Mar 24 14:01:15.510: INFO: Got endpoints: latency-svc-x2842 [788.472974ms] Mar 24 14:01:15.573: INFO: Created: latency-svc-5snz9 Mar 24 14:01:15.577: INFO: Got endpoints: latency-svc-5snz9 [815.250928ms] Mar 24 14:01:15.602: INFO: Created: latency-svc-fljdw Mar 24 14:01:15.614: INFO: Got endpoints: latency-svc-fljdw [786.639282ms] Mar 24 14:01:15.632: INFO: Created: latency-svc-sz22d Mar 24 14:01:15.644: INFO: Got endpoints: latency-svc-sz22d [771.815203ms] Mar 24 14:01:15.666: INFO: Created: latency-svc-p2hf8 Mar 24 14:01:15.729: INFO: Got endpoints: latency-svc-p2hf8 [777.826317ms] Mar 24 14:01:15.730: INFO: Created: latency-svc-bhh74 Mar 24 14:01:15.734: INFO: Got endpoints: latency-svc-bhh74 [747.666642ms] Mar 24 14:01:15.762: INFO: Created: latency-svc-cd45m Mar 24 14:01:15.777: INFO: Got endpoints: latency-svc-cd45m [760.273041ms] Mar 24 14:01:15.794: INFO: Created: latency-svc-b7bzm Mar 24 14:01:15.878: INFO: Got endpoints: latency-svc-b7bzm [776.078141ms] Mar 24 14:01:15.880: INFO: Created: latency-svc-xmqdb Mar 24 14:01:15.898: INFO: Got endpoints: latency-svc-xmqdb [703.555931ms] Mar 24 14:01:15.918: INFO: Created: latency-svc-h7lv8 Mar 24 14:01:15.934: INFO: Got endpoints: latency-svc-h7lv8 [680.860652ms] Mar 24 14:01:15.954: INFO: Created: latency-svc-7ncq4 Mar 24 14:01:15.970: INFO: Got endpoints: latency-svc-7ncq4 [682.219408ms] Mar 24 14:01:16.016: INFO: Created: latency-svc-cskpg Mar 24 14:01:16.018: INFO: Got endpoints: latency-svc-cskpg [700.37134ms] Mar 24 14:01:16.064: INFO: Created: latency-svc-ttzzh Mar 24 14:01:16.072: INFO: Got endpoints: latency-svc-ttzzh [724.004456ms] Mar 24 14:01:16.106: INFO: Created: latency-svc-r95s8 Mar 24 14:01:16.115: INFO: Got endpoints: latency-svc-r95s8 [700.634503ms] Mar 24 14:01:16.171: INFO: Created: latency-svc-wwdpt Mar 24 14:01:16.178: INFO: Got endpoints: latency-svc-wwdpt [708.568346ms] Mar 24 14:01:16.209: INFO: Created: latency-svc-zsvxc Mar 24 14:01:16.236: INFO: Got 
endpoints: latency-svc-zsvxc [725.94576ms] Mar 24 14:01:16.303: INFO: Created: latency-svc-sn92v Mar 24 14:01:16.306: INFO: Got endpoints: latency-svc-sn92v [728.562858ms] Mar 24 14:01:16.332: INFO: Created: latency-svc-kvfgm Mar 24 14:01:16.350: INFO: Got endpoints: latency-svc-kvfgm [736.529911ms] Mar 24 14:01:16.368: INFO: Created: latency-svc-rx9lc Mar 24 14:01:16.380: INFO: Got endpoints: latency-svc-rx9lc [736.574406ms] Mar 24 14:01:16.400: INFO: Created: latency-svc-vtddn Mar 24 14:01:16.465: INFO: Got endpoints: latency-svc-vtddn [736.205688ms] Mar 24 14:01:16.466: INFO: Created: latency-svc-c4nrj Mar 24 14:01:16.477: INFO: Got endpoints: latency-svc-c4nrj [742.475027ms] Mar 24 14:01:16.530: INFO: Created: latency-svc-kgs77 Mar 24 14:01:16.554: INFO: Got endpoints: latency-svc-kgs77 [776.254899ms] Mar 24 14:01:16.627: INFO: Created: latency-svc-ltswd Mar 24 14:01:16.695: INFO: Got endpoints: latency-svc-ltswd [816.556422ms] Mar 24 14:01:16.724: INFO: Created: latency-svc-gvflf Mar 24 14:01:16.758: INFO: Got endpoints: latency-svc-gvflf [860.485236ms] Mar 24 14:01:16.772: INFO: Created: latency-svc-rzl9h Mar 24 14:01:16.790: INFO: Got endpoints: latency-svc-rzl9h [855.513424ms] Mar 24 14:01:16.812: INFO: Created: latency-svc-89ghd Mar 24 14:01:16.832: INFO: Got endpoints: latency-svc-89ghd [861.529106ms] Mar 24 14:01:16.854: INFO: Created: latency-svc-r9vxv Mar 24 14:01:16.944: INFO: Got endpoints: latency-svc-r9vxv [925.809947ms] Mar 24 14:01:16.948: INFO: Created: latency-svc-hvq4m Mar 24 14:01:16.964: INFO: Got endpoints: latency-svc-hvq4m [891.572412ms] Mar 24 14:01:17.000: INFO: Created: latency-svc-6hb9w Mar 24 14:01:17.030: INFO: Got endpoints: latency-svc-6hb9w [915.498135ms] Mar 24 14:01:17.094: INFO: Created: latency-svc-vc2wl Mar 24 14:01:17.096: INFO: Got endpoints: latency-svc-vc2wl [918.472855ms] Mar 24 14:01:17.175: INFO: Created: latency-svc-6vgmm Mar 24 14:01:17.233: INFO: Got endpoints: latency-svc-6vgmm [997.211238ms] Mar 24 14:01:17.244: 
INFO: Created: latency-svc-qwm2p Mar 24 14:01:17.259: INFO: Got endpoints: latency-svc-qwm2p [952.830397ms] Mar 24 14:01:17.280: INFO: Created: latency-svc-c9rl9 Mar 24 14:01:17.289: INFO: Got endpoints: latency-svc-c9rl9 [939.011812ms] Mar 24 14:01:17.310: INFO: Created: latency-svc-psmmn Mar 24 14:01:17.320: INFO: Got endpoints: latency-svc-psmmn [938.977941ms] Mar 24 14:01:17.363: INFO: Created: latency-svc-5ndz5 Mar 24 14:01:17.366: INFO: Got endpoints: latency-svc-5ndz5 [900.937663ms] Mar 24 14:01:17.408: INFO: Created: latency-svc-vm96m Mar 24 14:01:17.423: INFO: Got endpoints: latency-svc-vm96m [945.677341ms] Mar 24 14:01:17.444: INFO: Created: latency-svc-wpskc Mar 24 14:01:17.483: INFO: Got endpoints: latency-svc-wpskc [928.936496ms] Mar 24 14:01:17.520: INFO: Created: latency-svc-cspg8 Mar 24 14:01:17.546: INFO: Got endpoints: latency-svc-cspg8 [851.189139ms] Mar 24 14:01:17.577: INFO: Created: latency-svc-hm8rk Mar 24 14:01:17.651: INFO: Got endpoints: latency-svc-hm8rk [892.424165ms] Mar 24 14:01:17.654: INFO: Created: latency-svc-xk7xl Mar 24 14:01:17.670: INFO: Got endpoints: latency-svc-xk7xl [879.856622ms] Mar 24 14:01:17.694: INFO: Created: latency-svc-dqp4q Mar 24 14:01:17.706: INFO: Got endpoints: latency-svc-dqp4q [874.209392ms] Mar 24 14:01:17.738: INFO: Created: latency-svc-ffrv7 Mar 24 14:01:17.818: INFO: Got endpoints: latency-svc-ffrv7 [873.684428ms] Mar 24 14:01:17.820: INFO: Created: latency-svc-nh98m Mar 24 14:01:17.826: INFO: Got endpoints: latency-svc-nh98m [861.814196ms] Mar 24 14:01:17.846: INFO: Created: latency-svc-mnp44 Mar 24 14:01:17.862: INFO: Got endpoints: latency-svc-mnp44 [831.888554ms] Mar 24 14:01:17.887: INFO: Created: latency-svc-bn982 Mar 24 14:01:17.910: INFO: Got endpoints: latency-svc-bn982 [813.992214ms] Mar 24 14:01:17.968: INFO: Created: latency-svc-4v2cr Mar 24 14:01:17.971: INFO: Got endpoints: latency-svc-4v2cr [738.016958ms] Mar 24 14:01:18.002: INFO: Created: latency-svc-m248p Mar 24 14:01:18.019: INFO: Got 
endpoints: latency-svc-m248p [760.327769ms] Mar 24 14:01:18.038: INFO: Created: latency-svc-lv5ln Mar 24 14:01:18.056: INFO: Got endpoints: latency-svc-lv5ln [766.231382ms] Mar 24 14:01:18.118: INFO: Created: latency-svc-46nj9 Mar 24 14:01:18.121: INFO: Got endpoints: latency-svc-46nj9 [801.152786ms] Mar 24 14:01:18.150: INFO: Created: latency-svc-ppr62 Mar 24 14:01:18.164: INFO: Got endpoints: latency-svc-ppr62 [798.33162ms] Mar 24 14:01:18.189: INFO: Created: latency-svc-x4v4g Mar 24 14:01:18.206: INFO: Got endpoints: latency-svc-x4v4g [783.798472ms] Mar 24 14:01:18.262: INFO: Created: latency-svc-r485n Mar 24 14:01:18.266: INFO: Got endpoints: latency-svc-r485n [782.65555ms] Mar 24 14:01:18.312: INFO: Created: latency-svc-5qnnq Mar 24 14:01:18.321: INFO: Got endpoints: latency-svc-5qnnq [774.418573ms] Mar 24 14:01:18.342: INFO: Created: latency-svc-2f7j9 Mar 24 14:01:18.351: INFO: Got endpoints: latency-svc-2f7j9 [700.141907ms] Mar 24 14:01:18.399: INFO: Created: latency-svc-xgrps Mar 24 14:01:18.422: INFO: Got endpoints: latency-svc-xgrps [752.248463ms] Mar 24 14:01:18.464: INFO: Created: latency-svc-27q2b Mar 24 14:01:18.490: INFO: Got endpoints: latency-svc-27q2b [783.998309ms] Mar 24 14:01:18.537: INFO: Created: latency-svc-wzh95 Mar 24 14:01:18.552: INFO: Got endpoints: latency-svc-wzh95 [733.519679ms] Mar 24 14:01:18.576: INFO: Created: latency-svc-4sncv Mar 24 14:01:18.586: INFO: Got endpoints: latency-svc-4sncv [760.16341ms] Mar 24 14:01:18.614: INFO: Created: latency-svc-kpzjm Mar 24 14:01:18.629: INFO: Got endpoints: latency-svc-kpzjm [766.504173ms] Mar 24 14:01:18.699: INFO: Created: latency-svc-nm8r7 Mar 24 14:01:18.702: INFO: Got endpoints: latency-svc-nm8r7 [791.975567ms] Mar 24 14:01:18.733: INFO: Created: latency-svc-22k2l Mar 24 14:01:18.749: INFO: Got endpoints: latency-svc-22k2l [777.889946ms] Mar 24 14:01:18.774: INFO: Created: latency-svc-rgz2w Mar 24 14:01:18.785: INFO: Got endpoints: latency-svc-rgz2w [765.887017ms] Mar 24 14:01:18.840: 
INFO: Created: latency-svc-msvf4 Mar 24 14:01:18.870: INFO: Got endpoints: latency-svc-msvf4 [814.72842ms] Mar 24 14:01:18.890: INFO: Created: latency-svc-gr6nm Mar 24 14:01:18.918: INFO: Got endpoints: latency-svc-gr6nm [797.396525ms] Mar 24 14:01:19.029: INFO: Created: latency-svc-ckr84 Mar 24 14:01:19.030: INFO: Got endpoints: latency-svc-ckr84 [866.186765ms] Mar 24 14:01:19.076: INFO: Created: latency-svc-x9qv9 Mar 24 14:01:19.092: INFO: Got endpoints: latency-svc-x9qv9 [885.868499ms] Mar 24 14:01:19.118: INFO: Created: latency-svc-r9wk8 Mar 24 14:01:19.185: INFO: Got endpoints: latency-svc-r9wk8 [919.304855ms] Mar 24 14:01:19.189: INFO: Created: latency-svc-h4fdc Mar 24 14:01:19.195: INFO: Got endpoints: latency-svc-h4fdc [874.222616ms] Mar 24 14:01:19.231: INFO: Created: latency-svc-5gl2w Mar 24 14:01:19.243: INFO: Got endpoints: latency-svc-5gl2w [891.881279ms] Mar 24 14:01:19.267: INFO: Created: latency-svc-7rzcb Mar 24 14:01:19.279: INFO: Got endpoints: latency-svc-7rzcb [857.391114ms] Mar 24 14:01:19.322: INFO: Created: latency-svc-gpv4w Mar 24 14:01:19.334: INFO: Got endpoints: latency-svc-gpv4w [843.572664ms] Mar 24 14:01:19.359: INFO: Created: latency-svc-bzbjg Mar 24 14:01:19.385: INFO: Got endpoints: latency-svc-bzbjg [833.596886ms] Mar 24 14:01:19.441: INFO: Created: latency-svc-258sv Mar 24 14:01:19.443: INFO: Got endpoints: latency-svc-258sv [857.267188ms] Mar 24 14:01:19.470: INFO: Created: latency-svc-s7nm6 Mar 24 14:01:19.479: INFO: Got endpoints: latency-svc-s7nm6 [849.764489ms] Mar 24 14:01:19.502: INFO: Created: latency-svc-sppjx Mar 24 14:01:19.515: INFO: Got endpoints: latency-svc-sppjx [813.053067ms] Mar 24 14:01:19.579: INFO: Created: latency-svc-k8wgv Mar 24 14:01:19.581: INFO: Got endpoints: latency-svc-k8wgv [832.335414ms] Mar 24 14:01:19.614: INFO: Created: latency-svc-fhnr7 Mar 24 14:01:19.630: INFO: Got endpoints: latency-svc-fhnr7 [844.343484ms] Mar 24 14:01:19.649: INFO: Created: latency-svc-dv77m Mar 24 14:01:19.667: INFO: Got 
endpoints: latency-svc-dv77m [796.995163ms] Mar 24 14:01:19.667: INFO: Latencies: [42.846728ms 66.928682ms 121.916016ms 162.961269ms 191.70545ms 273.469673ms 286.987569ms 331.193659ms 372.426425ms 445.290905ms 482.244953ms 524.13756ms 590.838987ms 595.236674ms 609.668283ms 610.501414ms 624.552016ms 627.955398ms 634.455176ms 634.961512ms 639.896229ms 642.285664ms 646.386349ms 647.777117ms 649.473437ms 649.83151ms 651.599573ms 652.088883ms 655.247677ms 663.614371ms 664.636557ms 666.791437ms 670.363162ms 675.673841ms 676.133095ms 676.178744ms 676.272693ms 676.356829ms 676.460966ms 677.709119ms 680.079419ms 680.860652ms 682.219408ms 684.244747ms 686.550868ms 692.536472ms 695.87864ms 700.114676ms 700.141907ms 700.37134ms 700.634503ms 703.555931ms 705.442451ms 707.401247ms 708.568346ms 712.037135ms 712.759027ms 715.106365ms 715.172037ms 717.050891ms 717.097357ms 722.584889ms 723.119707ms 723.23846ms 723.353709ms 723.959267ms 724.004456ms 724.410169ms 725.94576ms 727.641282ms 728.562858ms 728.585482ms 728.837669ms 729.073432ms 729.777473ms 733.519679ms 734.07609ms 735.190662ms 736.205688ms 736.529911ms 736.574406ms 738.016958ms 738.863088ms 739.743238ms 740.033293ms 741.022339ms 742.475027ms 744.320746ms 745.775762ms 745.963923ms 747.442391ms 747.666642ms 748.139999ms 748.419428ms 752.248463ms 752.771285ms 753.246112ms 753.524887ms 754.894845ms 756.339872ms 758.900183ms 760.16341ms 760.273041ms 760.327769ms 760.907786ms 762.545784ms 762.925857ms 765.687277ms 765.887017ms 765.951919ms 766.180436ms 766.231382ms 766.290103ms 766.504173ms 771.815203ms 774.418573ms 775.29928ms 776.078141ms 776.254899ms 777.826317ms 777.889946ms 779.500796ms 780.471138ms 781.696026ms 782.65555ms 783.798472ms 783.834921ms 783.998309ms 784.770959ms 785.537452ms 786.639282ms 788.472974ms 790.321888ms 791.975567ms 796.640687ms 796.971841ms 796.995163ms 797.396525ms 798.33162ms 801.152786ms 802.500364ms 806.937727ms 809.898486ms 813.053067ms 813.927ms 813.992214ms 814.72842ms 815.018055ms 
815.250928ms 816.556422ms 824.943344ms 829.785085ms 830.150851ms 831.888554ms 832.016706ms 832.335414ms 832.404551ms 833.596886ms 835.59818ms 842.400554ms 843.572664ms 844.343484ms 844.363491ms 844.528178ms 849.764489ms 851.189139ms 855.513424ms 857.267188ms 857.391114ms 860.485236ms 861.529106ms 861.814196ms 866.186765ms 868.134967ms 873.684428ms 874.209392ms 874.222616ms 875.522269ms 879.856622ms 885.868499ms 891.572412ms 891.881279ms 892.424165ms 900.937663ms 904.179205ms 904.605121ms 914.007637ms 915.498135ms 916.418375ms 918.472855ms 919.304855ms 925.809947ms 928.936496ms 938.977941ms 939.011812ms 940.778442ms 945.677341ms 952.830397ms 954.298116ms 997.211238ms] Mar 24 14:01:19.668: INFO: 50 %ile: 758.900183ms Mar 24 14:01:19.668: INFO: 90 %ile: 891.572412ms Mar 24 14:01:19.668: INFO: 99 %ile: 954.298116ms Mar 24 14:01:19.668: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:01:19.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4892" for this suite. 
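The percentile report above (50/90/99 %ile over 200 samples) can be reproduced from the raw latency list with a simple nearest-rank computation. This is an illustrative Python sketch, not the e2e framework's actual Go code, and its rounding may differ slightly from the framework's; the sample list below is a small subset of the latencies logged above.

```python
# Nearest-rank percentile over a sorted sample list: the smallest sample
# at or above p% of the data. (Hypothetical helper; the e2e framework's
# Go implementation may round differently.)
def percentile(sorted_samples, p):
    idx = max(0, int(round(p / 100.0 * len(sorted_samples))) - 1)
    return sorted_samples[idx]

# A subset of the per-service endpoint latencies (ms) from the log above.
latencies_ms = sorted([42.846728, 121.916016, 590.838987, 652.088883,
                       712.759027, 758.900183, 790.321888, 815.250928,
                       891.572412, 916.418375, 954.298116, 997.211238])
for p in (50, 90, 99):
    print(f"{p} %ile: {percentile(latencies_ms, p)}ms")
```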
Mar 24 14:01:41.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:01:41.811: INFO: namespace svc-latency-4892 deletion completed in 22.132723977s • [SLOW TEST:35.627 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:01:41.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:01:41.909: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 4.835941ms)
Mar 24 14:01:41.911: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.271483ms)
Mar 24 14:01:41.914: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.240044ms)
Mar 24 14:01:41.916: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.260042ms)
Mar 24 14:01:41.918: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.374037ms)
Mar 24 14:01:41.921: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.468908ms)
Mar 24 14:01:41.923: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.280735ms)
Mar 24 14:01:41.926: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.569149ms)
Mar 24 14:01:41.928: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.203072ms)
Mar 24 14:01:41.931: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.774321ms)
Mar 24 14:01:41.933: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.548046ms)
Mar 24 14:01:41.936: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.59451ms)
Mar 24 14:01:41.939: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.940549ms)
Mar 24 14:01:41.942: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.912609ms)
Mar 24 14:01:41.945: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.974143ms)
Mar 24 14:01:41.948: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.205784ms)
Mar 24 14:01:41.951: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.012281ms)
Mar 24 14:01:41.954: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.227596ms)
Mar 24 14:01:42.005: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 50.716389ms)
Mar 24 14:01:42.009: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.777276ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:01:42.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-687" for this suite. Mar 24 14:01:48.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:01:48.133: INFO: namespace proxy-687 deletion completed in 6.119841655s • [SLOW TEST:6.321 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:01:48.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
creating the initial replication controller Mar 24 14:01:48.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8251' Mar 24 14:01:50.960: INFO: stderr: "" Mar 24 14:01:50.960: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 14:01:50.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8251' Mar 24 14:01:51.071: INFO: stderr: "" Mar 24 14:01:51.071: INFO: stdout: "update-demo-nautilus-hwhr7 update-demo-nautilus-vjmzq " Mar 24 14:01:51.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hwhr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:01:51.161: INFO: stderr: "" Mar 24 14:01:51.161: INFO: stdout: "" Mar 24 14:01:51.161: INFO: update-demo-nautilus-hwhr7 is created but not running Mar 24 14:01:56.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8251' Mar 24 14:01:56.268: INFO: stderr: "" Mar 24 14:01:56.268: INFO: stdout: "update-demo-nautilus-hwhr7 update-demo-nautilus-vjmzq " Mar 24 14:01:56.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hwhr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:01:56.360: INFO: stderr: "" Mar 24 14:01:56.361: INFO: stdout: "true" Mar 24 14:01:56.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hwhr7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:01:56.449: INFO: stderr: "" Mar 24 14:01:56.449: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 14:01:56.449: INFO: validating pod update-demo-nautilus-hwhr7 Mar 24 14:01:56.452: INFO: got data: { "image": "nautilus.jpg" } Mar 24 14:01:56.452: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 14:01:56.452: INFO: update-demo-nautilus-hwhr7 is verified up and running Mar 24 14:01:56.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjmzq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:01:56.545: INFO: stderr: "" Mar 24 14:01:56.545: INFO: stdout: "true" Mar 24 14:01:56.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjmzq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:01:56.645: INFO: stderr: "" Mar 24 14:01:56.645: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 14:01:56.645: INFO: validating pod update-demo-nautilus-vjmzq Mar 24 14:01:56.650: INFO: got data: { "image": "nautilus.jpg" } Mar 24 14:01:56.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 24 14:01:56.650: INFO: update-demo-nautilus-vjmzq is verified up and running STEP: rolling-update to new replication controller Mar 24 14:01:56.652: INFO: scanned /root for discovery docs: Mar 24 14:01:56.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8251' Mar 24 14:02:19.278: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 24 14:02:19.278: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 14:02:19.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8251' Mar 24 14:02:19.374: INFO: stderr: "" Mar 24 14:02:19.374: INFO: stdout: "update-demo-kitten-5w4xz update-demo-kitten-rcp5f " Mar 24 14:02:19.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5w4xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:02:19.468: INFO: stderr: "" Mar 24 14:02:19.468: INFO: stdout: "true" Mar 24 14:02:19.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5w4xz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:02:19.561: INFO: stderr: "" Mar 24 14:02:19.561: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 24 14:02:19.561: INFO: validating pod update-demo-kitten-5w4xz Mar 24 14:02:19.564: INFO: got data: { "image": "kitten.jpg" } Mar 24 14:02:19.565: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 24 14:02:19.565: INFO: update-demo-kitten-5w4xz is verified up and running Mar 24 14:02:19.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rcp5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:02:19.656: INFO: stderr: "" Mar 24 14:02:19.656: INFO: stdout: "true" Mar 24 14:02:19.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rcp5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8251' Mar 24 14:02:19.741: INFO: stderr: "" Mar 24 14:02:19.741: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 24 14:02:19.741: INFO: validating pod update-demo-kitten-rcp5f Mar 24 14:02:19.745: INFO: got data: { "image": "kitten.jpg" } Mar 24 14:02:19.745: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
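The `--template` checks above print `true` only when a container named `update-demo` reports a `running` state in the pod's `containerStatuses`. A minimal Python sketch of the same predicate, evaluated against a dict shaped like a `kubectl get pod -o json` object (the pod value below is illustrative, not taken from this run):

```python
# Sketch of the predicate the Go template encodes: does a container with
# the given name have a "running" entry in its state?
def container_running(pod, name="update-demo"):
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return any(s.get("name") == name and "running" in s.get("state", {})
               for s in statuses)

# Hypothetical pod object mimicking the relevant slice of pod JSON.
pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "..."}}}]}}
print(container_running(pod))  # → True
```

The template additionally guards with `exists . "status" "containerStatuses"` before ranging; the `.get(..., {})` chain above plays the same defensive role.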
Mar 24 14:02:19.745: INFO: update-demo-kitten-rcp5f is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:02:19.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8251" for this suite. Mar 24 14:02:43.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:02:43.830: INFO: namespace kubectl-8251 deletion completed in 24.082401042s • [SLOW TEST:55.696 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:02:43.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7608.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7608.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7608.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7608.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7608.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 48.72.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.72.48_udp@PTR;check="$$(dig +tcp +noall +answer +search 48.72.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.72.48_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7608.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7608.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7608.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7608.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7608.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7608.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 48.72.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.72.48_udp@PTR;check="$$(dig +tcp +noall +answer +search 48.72.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.72.48_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 14:02:49.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:49.993: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:49.996: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:49.998: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:50.018: INFO: Unable to read jessie_udp@dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:50.021: INFO: Unable to read jessie_tcp@dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:50.024: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local from pod 
dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:50.027: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local from pod dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca: the server could not find the requested resource (get pods dns-test-e67a6517-ae76-4c00-821c-a954653241ca) Mar 24 14:02:50.047: INFO: Lookups using dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca failed for: [wheezy_udp@dns-test-service.dns-7608.svc.cluster.local wheezy_tcp@dns-test-service.dns-7608.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local jessie_udp@dns-test-service.dns-7608.svc.cluster.local jessie_tcp@dns-test-service.dns-7608.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7608.svc.cluster.local] [identical lookup-failure blocks repeated every 5s from 14:02:55 through 14:03:15 omitted] Mar 24 14:03:20.101: INFO: DNS probes using dns-7608/dns-test-e67a6517-ae76-4c00-821c-a954653241ca succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:03:20.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7608" for this suite.
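The wheezy/jessie probe loops logged above build their record names with `awk` and write an `OK` marker file per lookup only when `dig` returns a non-empty answer (the doubled `$$` in the log is shell escaping inside the pod spec). The pattern can be sketched standalone; the pod IP and the `fake_dig` stand-in for `dig +noall +answer +search` below are illustrative assumptions, while `dns-7608` and `10.99.72.48` come from this run:

```shell
#!/bin/sh
# Sketch of the probe script's record-name construction and OK-marker pattern.

# Dashed pod A-record name derived from `hostname -i` in the real script
# (pod_ip is illustrative, not from this run):
pod_ip="10.244.1.3"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7608.pod.cluster.local"}')
echo "$podARec"   # 10-244-1-3.dns-7608.pod.cluster.local

# Reverse name used for the PTR lookup of the service IP 10.99.72.48:
ptr=$(echo "10.99.72.48" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"       # 48.72.99.10.in-addr.arpa.

# OK-marker pattern: write OK to a results file only when the answer is
# non-empty. fake_dig stands in for `dig +noall +answer +search` so the
# sketch runs without a DNS server.
results=$(mktemp -d)
fake_dig() { echo "dns-test-service.dns-7608.svc.cluster.local. 30 IN A 10.99.72.48"; }
check="$(fake_dig dns-test-service.dns-7608.svc.cluster.local A)" \
  && test -n "$check" \
  && echo OK > "$results/udp@dns-test-service"
cat "$results/udp@dns-test-service"   # OK
```

The prober pod then serves the `/results` directory, which is why the test log reads each expected name (e.g. `wheezy_udp@PodARecord`) back from the pod.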
Mar 24 14:03:26.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:03:26.867: INFO: namespace dns-7608 deletion completed in 6.248741339s • [SLOW TEST:43.037 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:03:26.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 24 14:03:37.039: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.039: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.073643 6 log.go:172] (0xc00140cb00) (0xc001d755e0) Create stream I0324 14:03:37.073675 6 log.go:172] (0xc00140cb00) (0xc001d755e0) Stream added, 
broadcasting: 1 I0324 14:03:37.075753 6 log.go:172] (0xc00140cb00) Reply frame received for 1 I0324 14:03:37.075799 6 log.go:172] (0xc00140cb00) (0xc000566c80) Create stream I0324 14:03:37.075812 6 log.go:172] (0xc00140cb00) (0xc000566c80) Stream added, broadcasting: 3 I0324 14:03:37.076835 6 log.go:172] (0xc00140cb00) Reply frame received for 3 I0324 14:03:37.076863 6 log.go:172] (0xc00140cb00) (0xc0009da320) Create stream I0324 14:03:37.076872 6 log.go:172] (0xc00140cb00) (0xc0009da320) Stream added, broadcasting: 5 I0324 14:03:37.077999 6 log.go:172] (0xc00140cb00) Reply frame received for 5 I0324 14:03:37.150121 6 log.go:172] (0xc00140cb00) Data frame received for 5 I0324 14:03:37.150179 6 log.go:172] (0xc0009da320) (5) Data frame handling I0324 14:03:37.150219 6 log.go:172] (0xc00140cb00) Data frame received for 3 I0324 14:03:37.150237 6 log.go:172] (0xc000566c80) (3) Data frame handling I0324 14:03:37.150259 6 log.go:172] (0xc000566c80) (3) Data frame sent I0324 14:03:37.150274 6 log.go:172] (0xc00140cb00) Data frame received for 3 I0324 14:03:37.150287 6 log.go:172] (0xc000566c80) (3) Data frame handling I0324 14:03:37.151506 6 log.go:172] (0xc00140cb00) Data frame received for 1 I0324 14:03:37.151521 6 log.go:172] (0xc001d755e0) (1) Data frame handling I0324 14:03:37.151532 6 log.go:172] (0xc001d755e0) (1) Data frame sent I0324 14:03:37.151548 6 log.go:172] (0xc00140cb00) (0xc001d755e0) Stream removed, broadcasting: 1 I0324 14:03:37.151619 6 log.go:172] (0xc00140cb00) (0xc001d755e0) Stream removed, broadcasting: 1 I0324 14:03:37.151631 6 log.go:172] (0xc00140cb00) (0xc000566c80) Stream removed, broadcasting: 3 I0324 14:03:37.151688 6 log.go:172] (0xc00140cb00) Go away received I0324 14:03:37.151765 6 log.go:172] (0xc00140cb00) (0xc0009da320) Stream removed, broadcasting: 5 Mar 24 14:03:37.151: INFO: Exec stderr: "" Mar 24 14:03:37.151: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.151: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.187450 6 log.go:172] (0xc0001fd760) (0xc000567400) Create stream I0324 14:03:37.187479 6 log.go:172] (0xc0001fd760) (0xc000567400) Stream added, broadcasting: 1 I0324 14:03:37.190140 6 log.go:172] (0xc0001fd760) Reply frame received for 1 I0324 14:03:37.190191 6 log.go:172] (0xc0001fd760) (0xc0009da3c0) Create stream I0324 14:03:37.190205 6 log.go:172] (0xc0001fd760) (0xc0009da3c0) Stream added, broadcasting: 3 I0324 14:03:37.191085 6 log.go:172] (0xc0001fd760) Reply frame received for 3 I0324 14:03:37.191126 6 log.go:172] (0xc0001fd760) (0xc001d75680) Create stream I0324 14:03:37.191142 6 log.go:172] (0xc0001fd760) (0xc001d75680) Stream added, broadcasting: 5 I0324 14:03:37.191982 6 log.go:172] (0xc0001fd760) Reply frame received for 5 I0324 14:03:37.247477 6 log.go:172] (0xc0001fd760) Data frame received for 5 I0324 14:03:37.247521 6 log.go:172] (0xc0001fd760) Data frame received for 3 I0324 14:03:37.247546 6 log.go:172] (0xc0009da3c0) (3) Data frame handling I0324 14:03:37.247572 6 log.go:172] (0xc0009da3c0) (3) Data frame sent I0324 14:03:37.247604 6 log.go:172] (0xc0001fd760) Data frame received for 3 I0324 14:03:37.247630 6 log.go:172] (0xc0009da3c0) (3) Data frame handling I0324 14:03:37.247664 6 log.go:172] (0xc001d75680) (5) Data frame handling I0324 14:03:37.249652 6 log.go:172] (0xc0001fd760) Data frame received for 1 I0324 14:03:37.249672 6 log.go:172] (0xc000567400) (1) Data frame handling I0324 14:03:37.249684 6 log.go:172] (0xc000567400) (1) Data frame sent I0324 14:03:37.249691 6 log.go:172] (0xc0001fd760) (0xc000567400) Stream removed, broadcasting: 1 I0324 14:03:37.249774 6 log.go:172] (0xc0001fd760) (0xc000567400) Stream removed, broadcasting: 1 I0324 14:03:37.249784 6 log.go:172] (0xc0001fd760) (0xc0009da3c0) Stream removed, broadcasting: 3 I0324 14:03:37.249790 6 log.go:172] 
(0xc0001fd760) (0xc001d75680) Stream removed, broadcasting: 5 Mar 24 14:03:37.249: INFO: Exec stderr: "" I0324 14:03:37.249824 6 log.go:172] (0xc0001fd760) Go away received Mar 24 14:03:37.249: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.249: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.285266 6 log.go:172] (0xc00140da20) (0xc001d75f40) Create stream I0324 14:03:37.285295 6 log.go:172] (0xc00140da20) (0xc001d75f40) Stream added, broadcasting: 1 I0324 14:03:37.288047 6 log.go:172] (0xc00140da20) Reply frame received for 1 I0324 14:03:37.288098 6 log.go:172] (0xc00140da20) (0xc0009da500) Create stream I0324 14:03:37.288114 6 log.go:172] (0xc00140da20) (0xc0009da500) Stream added, broadcasting: 3 I0324 14:03:37.289275 6 log.go:172] (0xc00140da20) Reply frame received for 3 I0324 14:03:37.289322 6 log.go:172] (0xc00140da20) (0xc0009da640) Create stream I0324 14:03:37.289336 6 log.go:172] (0xc00140da20) (0xc0009da640) Stream added, broadcasting: 5 I0324 14:03:37.290437 6 log.go:172] (0xc00140da20) Reply frame received for 5 I0324 14:03:37.354163 6 log.go:172] (0xc00140da20) Data frame received for 5 I0324 14:03:37.354197 6 log.go:172] (0xc0009da640) (5) Data frame handling I0324 14:03:37.354230 6 log.go:172] (0xc00140da20) Data frame received for 3 I0324 14:03:37.354241 6 log.go:172] (0xc0009da500) (3) Data frame handling I0324 14:03:37.354249 6 log.go:172] (0xc0009da500) (3) Data frame sent I0324 14:03:37.354265 6 log.go:172] (0xc00140da20) Data frame received for 3 I0324 14:03:37.354276 6 log.go:172] (0xc0009da500) (3) Data frame handling I0324 14:03:37.355470 6 log.go:172] (0xc00140da20) Data frame received for 1 I0324 14:03:37.355514 6 log.go:172] (0xc001d75f40) (1) Data frame handling I0324 14:03:37.355551 6 log.go:172] (0xc001d75f40) (1) Data frame sent I0324 14:03:37.355584 6 
log.go:172] (0xc00140da20) (0xc001d75f40) Stream removed, broadcasting: 1 I0324 14:03:37.355605 6 log.go:172] (0xc00140da20) Go away received I0324 14:03:37.355791 6 log.go:172] (0xc00140da20) (0xc001d75f40) Stream removed, broadcasting: 1 I0324 14:03:37.355829 6 log.go:172] (0xc00140da20) (0xc0009da500) Stream removed, broadcasting: 3 I0324 14:03:37.355855 6 log.go:172] (0xc00140da20) (0xc0009da640) Stream removed, broadcasting: 5 Mar 24 14:03:37.355: INFO: Exec stderr: "" Mar 24 14:03:37.355: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.355: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.386251 6 log.go:172] (0xc000fa26e0) (0xc0030c0c80) Create stream I0324 14:03:37.386278 6 log.go:172] (0xc000fa26e0) (0xc0030c0c80) Stream added, broadcasting: 1 I0324 14:03:37.388467 6 log.go:172] (0xc000fa26e0) Reply frame received for 1 I0324 14:03:37.388509 6 log.go:172] (0xc000fa26e0) (0xc0005674a0) Create stream I0324 14:03:37.388522 6 log.go:172] (0xc000fa26e0) (0xc0005674a0) Stream added, broadcasting: 3 I0324 14:03:37.389500 6 log.go:172] (0xc000fa26e0) Reply frame received for 3 I0324 14:03:37.389563 6 log.go:172] (0xc000fa26e0) (0xc00149e000) Create stream I0324 14:03:37.389587 6 log.go:172] (0xc000fa26e0) (0xc00149e000) Stream added, broadcasting: 5 I0324 14:03:37.390338 6 log.go:172] (0xc000fa26e0) Reply frame received for 5 I0324 14:03:37.456198 6 log.go:172] (0xc000fa26e0) Data frame received for 3 I0324 14:03:37.456255 6 log.go:172] (0xc0005674a0) (3) Data frame handling I0324 14:03:37.456277 6 log.go:172] (0xc0005674a0) (3) Data frame sent I0324 14:03:37.456298 6 log.go:172] (0xc000fa26e0) Data frame received for 3 I0324 14:03:37.456317 6 log.go:172] (0xc0005674a0) (3) Data frame handling I0324 14:03:37.456349 6 log.go:172] (0xc000fa26e0) Data frame received for 5 I0324 
14:03:37.456366 6 log.go:172] (0xc00149e000) (5) Data frame handling I0324 14:03:37.458061 6 log.go:172] (0xc000fa26e0) Data frame received for 1 I0324 14:03:37.458072 6 log.go:172] (0xc0030c0c80) (1) Data frame handling I0324 14:03:37.458084 6 log.go:172] (0xc0030c0c80) (1) Data frame sent I0324 14:03:37.458097 6 log.go:172] (0xc000fa26e0) (0xc0030c0c80) Stream removed, broadcasting: 1 I0324 14:03:37.458183 6 log.go:172] (0xc000fa26e0) (0xc0030c0c80) Stream removed, broadcasting: 1 I0324 14:03:37.458204 6 log.go:172] (0xc000fa26e0) (0xc0005674a0) Stream removed, broadcasting: 3 I0324 14:03:37.458256 6 log.go:172] (0xc000fa26e0) (0xc00149e000) Stream removed, broadcasting: 5 Mar 24 14:03:37.458: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount I0324 14:03:37.458300 6 log.go:172] (0xc000fa26e0) Go away received Mar 24 14:03:37.458: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.458: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.484493 6 log.go:172] (0xc00218ca50) (0xc000567c20) Create stream I0324 14:03:37.484509 6 log.go:172] (0xc00218ca50) (0xc000567c20) Stream added, broadcasting: 1 I0324 14:03:37.486978 6 log.go:172] (0xc00218ca50) Reply frame received for 1 I0324 14:03:37.487019 6 log.go:172] (0xc00218ca50) (0xc00149e140) Create stream I0324 14:03:37.487035 6 log.go:172] (0xc00218ca50) (0xc00149e140) Stream added, broadcasting: 3 I0324 14:03:37.487900 6 log.go:172] (0xc00218ca50) Reply frame received for 3 I0324 14:03:37.488014 6 log.go:172] (0xc00218ca50) (0xc0030c0e60) Create stream I0324 14:03:37.488023 6 log.go:172] (0xc00218ca50) (0xc0030c0e60) Stream added, broadcasting: 5 I0324 14:03:37.488998 6 log.go:172] (0xc00218ca50) Reply frame received for 5 I0324 14:03:37.548517 6 log.go:172] (0xc00218ca50) Data 
frame received for 3 I0324 14:03:37.548548 6 log.go:172] (0xc00149e140) (3) Data frame handling I0324 14:03:37.548568 6 log.go:172] (0xc00149e140) (3) Data frame sent I0324 14:03:37.548585 6 log.go:172] (0xc00218ca50) Data frame received for 3 I0324 14:03:37.548598 6 log.go:172] (0xc00149e140) (3) Data frame handling I0324 14:03:37.548690 6 log.go:172] (0xc00218ca50) Data frame received for 5 I0324 14:03:37.548722 6 log.go:172] (0xc0030c0e60) (5) Data frame handling I0324 14:03:37.550692 6 log.go:172] (0xc00218ca50) Data frame received for 1 I0324 14:03:37.550716 6 log.go:172] (0xc000567c20) (1) Data frame handling I0324 14:03:37.550729 6 log.go:172] (0xc000567c20) (1) Data frame sent I0324 14:03:37.550746 6 log.go:172] (0xc00218ca50) (0xc000567c20) Stream removed, broadcasting: 1 I0324 14:03:37.550786 6 log.go:172] (0xc00218ca50) Go away received I0324 14:03:37.550830 6 log.go:172] (0xc00218ca50) (0xc000567c20) Stream removed, broadcasting: 1 I0324 14:03:37.550850 6 log.go:172] (0xc00218ca50) (0xc00149e140) Stream removed, broadcasting: 3 I0324 14:03:37.550862 6 log.go:172] (0xc00218ca50) (0xc0030c0e60) Stream removed, broadcasting: 5 Mar 24 14:03:37.550: INFO: Exec stderr: "" Mar 24 14:03:37.550: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.550: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.583482 6 log.go:172] (0xc000fa38c0) (0xc0030c14a0) Create stream I0324 14:03:37.583514 6 log.go:172] (0xc000fa38c0) (0xc0030c14a0) Stream added, broadcasting: 1 I0324 14:03:37.588562 6 log.go:172] (0xc000fa38c0) Reply frame received for 1 I0324 14:03:37.588626 6 log.go:172] (0xc000fa38c0) (0xc00149e1e0) Create stream I0324 14:03:37.588649 6 log.go:172] (0xc000fa38c0) (0xc00149e1e0) Stream added, broadcasting: 3 I0324 14:03:37.590955 6 log.go:172] (0xc000fa38c0) Reply frame received for 3 I0324 
14:03:37.591083 6 log.go:172] (0xc000fa38c0) (0xc002001b80) Create stream I0324 14:03:37.591157 6 log.go:172] (0xc000fa38c0) (0xc002001b80) Stream added, broadcasting: 5 I0324 14:03:37.592519 6 log.go:172] (0xc000fa38c0) Reply frame received for 5 I0324 14:03:37.652424 6 log.go:172] (0xc000fa38c0) Data frame received for 5 I0324 14:03:37.652472 6 log.go:172] (0xc002001b80) (5) Data frame handling I0324 14:03:37.652497 6 log.go:172] (0xc000fa38c0) Data frame received for 3 I0324 14:03:37.652510 6 log.go:172] (0xc00149e1e0) (3) Data frame handling I0324 14:03:37.652525 6 log.go:172] (0xc00149e1e0) (3) Data frame sent I0324 14:03:37.652538 6 log.go:172] (0xc000fa38c0) Data frame received for 3 I0324 14:03:37.652550 6 log.go:172] (0xc00149e1e0) (3) Data frame handling I0324 14:03:37.654205 6 log.go:172] (0xc000fa38c0) Data frame received for 1 I0324 14:03:37.654237 6 log.go:172] (0xc0030c14a0) (1) Data frame handling I0324 14:03:37.654251 6 log.go:172] (0xc0030c14a0) (1) Data frame sent I0324 14:03:37.654322 6 log.go:172] (0xc000fa38c0) (0xc0030c14a0) Stream removed, broadcasting: 1 I0324 14:03:37.654364 6 log.go:172] (0xc000fa38c0) Go away received I0324 14:03:37.654478 6 log.go:172] (0xc000fa38c0) (0xc0030c14a0) Stream removed, broadcasting: 1 I0324 14:03:37.654518 6 log.go:172] (0xc000fa38c0) (0xc00149e1e0) Stream removed, broadcasting: 3 I0324 14:03:37.654549 6 log.go:172] (0xc000fa38c0) (0xc002001b80) Stream removed, broadcasting: 5 Mar 24 14:03:37.654: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 24 14:03:37.654: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.654: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.689778 6 log.go:172] (0xc0028d4210) (0xc0030c1900) Create stream I0324 14:03:37.689824 6 
log.go:172] (0xc0028d4210) (0xc0030c1900) Stream added, broadcasting: 1 I0324 14:03:37.691889 6 log.go:172] (0xc0028d4210) Reply frame received for 1 I0324 14:03:37.691936 6 log.go:172] (0xc0028d4210) (0xc00149e280) Create stream I0324 14:03:37.691957 6 log.go:172] (0xc0028d4210) (0xc00149e280) Stream added, broadcasting: 3 I0324 14:03:37.692715 6 log.go:172] (0xc0028d4210) Reply frame received for 3 I0324 14:03:37.692744 6 log.go:172] (0xc0028d4210) (0xc002001c20) Create stream I0324 14:03:37.692760 6 log.go:172] (0xc0028d4210) (0xc002001c20) Stream added, broadcasting: 5 I0324 14:03:37.693826 6 log.go:172] (0xc0028d4210) Reply frame received for 5 I0324 14:03:37.746297 6 log.go:172] (0xc0028d4210) Data frame received for 3 I0324 14:03:37.746342 6 log.go:172] (0xc00149e280) (3) Data frame handling I0324 14:03:37.746353 6 log.go:172] (0xc00149e280) (3) Data frame sent I0324 14:03:37.746364 6 log.go:172] (0xc0028d4210) Data frame received for 3 I0324 14:03:37.746373 6 log.go:172] (0xc00149e280) (3) Data frame handling I0324 14:03:37.746397 6 log.go:172] (0xc0028d4210) Data frame received for 5 I0324 14:03:37.746408 6 log.go:172] (0xc002001c20) (5) Data frame handling I0324 14:03:37.747631 6 log.go:172] (0xc0028d4210) Data frame received for 1 I0324 14:03:37.747656 6 log.go:172] (0xc0030c1900) (1) Data frame handling I0324 14:03:37.747684 6 log.go:172] (0xc0030c1900) (1) Data frame sent I0324 14:03:37.747717 6 log.go:172] (0xc0028d4210) (0xc0030c1900) Stream removed, broadcasting: 1 I0324 14:03:37.747825 6 log.go:172] (0xc0028d4210) Go away received I0324 14:03:37.747860 6 log.go:172] (0xc0028d4210) (0xc0030c1900) Stream removed, broadcasting: 1 I0324 14:03:37.747882 6 log.go:172] (0xc0028d4210) (0xc00149e280) Stream removed, broadcasting: 3 I0324 14:03:37.747896 6 log.go:172] (0xc0028d4210) (0xc002001c20) Stream removed, broadcasting: 5 Mar 24 14:03:37.747: INFO: Exec stderr: "" Mar 24 14:03:37.747: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.747: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.789655 6 log.go:172] (0xc0023bb8c0) (0xc00149e5a0) Create stream I0324 14:03:37.789679 6 log.go:172] (0xc0023bb8c0) (0xc00149e5a0) Stream added, broadcasting: 1 I0324 14:03:37.792272 6 log.go:172] (0xc0023bb8c0) Reply frame received for 1 I0324 14:03:37.792336 6 log.go:172] (0xc0023bb8c0) (0xc0030c19a0) Create stream I0324 14:03:37.792363 6 log.go:172] (0xc0023bb8c0) (0xc0030c19a0) Stream added, broadcasting: 3 I0324 14:03:37.793532 6 log.go:172] (0xc0023bb8c0) Reply frame received for 3 I0324 14:03:37.793576 6 log.go:172] (0xc0023bb8c0) (0xc0009da8c0) Create stream I0324 14:03:37.793591 6 log.go:172] (0xc0023bb8c0) (0xc0009da8c0) Stream added, broadcasting: 5 I0324 14:03:37.794516 6 log.go:172] (0xc0023bb8c0) Reply frame received for 5 I0324 14:03:37.852789 6 log.go:172] (0xc0023bb8c0) Data frame received for 5 I0324 14:03:37.852825 6 log.go:172] (0xc0009da8c0) (5) Data frame handling I0324 14:03:37.852852 6 log.go:172] (0xc0023bb8c0) Data frame received for 3 I0324 14:03:37.852864 6 log.go:172] (0xc0030c19a0) (3) Data frame handling I0324 14:03:37.852873 6 log.go:172] (0xc0030c19a0) (3) Data frame sent I0324 14:03:37.852883 6 log.go:172] (0xc0023bb8c0) Data frame received for 3 I0324 14:03:37.852890 6 log.go:172] (0xc0030c19a0) (3) Data frame handling I0324 14:03:37.854459 6 log.go:172] (0xc0023bb8c0) Data frame received for 1 I0324 14:03:37.854490 6 log.go:172] (0xc00149e5a0) (1) Data frame handling I0324 14:03:37.854509 6 log.go:172] (0xc00149e5a0) (1) Data frame sent I0324 14:03:37.854524 6 log.go:172] (0xc0023bb8c0) (0xc00149e5a0) Stream removed, broadcasting: 1 I0324 14:03:37.854542 6 log.go:172] (0xc0023bb8c0) Go away received I0324 14:03:37.854629 6 log.go:172] (0xc0023bb8c0) (0xc00149e5a0) Stream removed, broadcasting: 1 
I0324 14:03:37.854651 6 log.go:172] (0xc0023bb8c0) (0xc0030c19a0) Stream removed, broadcasting: 3 I0324 14:03:37.854665 6 log.go:172] (0xc0023bb8c0) (0xc0009da8c0) Stream removed, broadcasting: 5 Mar 24 14:03:37.854: INFO: Exec stderr: "" Mar 24 14:03:37.854: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.854: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.880281 6 log.go:172] (0xc00218d6b0) (0xc002f4c0a0) Create stream I0324 14:03:37.880312 6 log.go:172] (0xc00218d6b0) (0xc002f4c0a0) Stream added, broadcasting: 1 I0324 14:03:37.883087 6 log.go:172] (0xc00218d6b0) Reply frame received for 1 I0324 14:03:37.883139 6 log.go:172] (0xc00218d6b0) (0xc00149e640) Create stream I0324 14:03:37.883157 6 log.go:172] (0xc00218d6b0) (0xc00149e640) Stream added, broadcasting: 3 I0324 14:03:37.884194 6 log.go:172] (0xc00218d6b0) Reply frame received for 3 I0324 14:03:37.884224 6 log.go:172] (0xc00218d6b0) (0xc002001ea0) Create stream I0324 14:03:37.884236 6 log.go:172] (0xc00218d6b0) (0xc002001ea0) Stream added, broadcasting: 5 I0324 14:03:37.885212 6 log.go:172] (0xc00218d6b0) Reply frame received for 5 I0324 14:03:37.948480 6 log.go:172] (0xc00218d6b0) Data frame received for 5 I0324 14:03:37.948514 6 log.go:172] (0xc002001ea0) (5) Data frame handling I0324 14:03:37.948532 6 log.go:172] (0xc00218d6b0) Data frame received for 3 I0324 14:03:37.948542 6 log.go:172] (0xc00149e640) (3) Data frame handling I0324 14:03:37.948555 6 log.go:172] (0xc00149e640) (3) Data frame sent I0324 14:03:37.948565 6 log.go:172] (0xc00218d6b0) Data frame received for 3 I0324 14:03:37.948571 6 log.go:172] (0xc00149e640) (3) Data frame handling I0324 14:03:37.949788 6 log.go:172] (0xc00218d6b0) Data frame received for 1 I0324 14:03:37.949806 6 log.go:172] (0xc002f4c0a0) (1) Data frame handling I0324 14:03:37.949814 6 
log.go:172] (0xc002f4c0a0) (1) Data frame sent I0324 14:03:37.949828 6 log.go:172] (0xc00218d6b0) (0xc002f4c0a0) Stream removed, broadcasting: 1 I0324 14:03:37.949883 6 log.go:172] (0xc00218d6b0) Go away received I0324 14:03:37.949929 6 log.go:172] (0xc00218d6b0) (0xc002f4c0a0) Stream removed, broadcasting: 1 I0324 14:03:37.949952 6 log.go:172] (0xc00218d6b0) (0xc00149e640) Stream removed, broadcasting: 3 I0324 14:03:37.949968 6 log.go:172] (0xc00218d6b0) (0xc002001ea0) Stream removed, broadcasting: 5 Mar 24 14:03:37.949: INFO: Exec stderr: "" Mar 24 14:03:37.950: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8172 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:03:37.950: INFO: >>> kubeConfig: /root/.kube/config I0324 14:03:37.984629 6 log.go:172] (0xc00259b1e0) (0xc001b2c1e0) Create stream I0324 14:03:37.984653 6 log.go:172] (0xc00259b1e0) (0xc001b2c1e0) Stream added, broadcasting: 1 I0324 14:03:37.986915 6 log.go:172] (0xc00259b1e0) Reply frame received for 1 I0324 14:03:37.986956 6 log.go:172] (0xc00259b1e0) (0xc001b2c280) Create stream I0324 14:03:37.986975 6 log.go:172] (0xc00259b1e0) (0xc001b2c280) Stream added, broadcasting: 3 I0324 14:03:37.987940 6 log.go:172] (0xc00259b1e0) Reply frame received for 3 I0324 14:03:37.987978 6 log.go:172] (0xc00259b1e0) (0xc001b2c3c0) Create stream I0324 14:03:37.987996 6 log.go:172] (0xc00259b1e0) (0xc001b2c3c0) Stream added, broadcasting: 5 I0324 14:03:37.988885 6 log.go:172] (0xc00259b1e0) Reply frame received for 5 I0324 14:03:38.056922 6 log.go:172] (0xc00259b1e0) Data frame received for 3 I0324 14:03:38.056948 6 log.go:172] (0xc001b2c280) (3) Data frame handling I0324 14:03:38.056964 6 log.go:172] (0xc001b2c280) (3) Data frame sent I0324 14:03:38.056972 6 log.go:172] (0xc00259b1e0) Data frame received for 3 I0324 14:03:38.056982 6 log.go:172] (0xc001b2c280) (3) Data frame handling I0324 
14:03:38.057411 6 log.go:172] (0xc00259b1e0) Data frame received for 5 I0324 14:03:38.057435 6 log.go:172] (0xc001b2c3c0) (5) Data frame handling I0324 14:03:38.059086 6 log.go:172] (0xc00259b1e0) Data frame received for 1 I0324 14:03:38.059112 6 log.go:172] (0xc001b2c1e0) (1) Data frame handling I0324 14:03:38.059125 6 log.go:172] (0xc001b2c1e0) (1) Data frame sent I0324 14:03:38.059149 6 log.go:172] (0xc00259b1e0) (0xc001b2c1e0) Stream removed, broadcasting: 1 I0324 14:03:38.059179 6 log.go:172] (0xc00259b1e0) Go away received I0324 14:03:38.059300 6 log.go:172] (0xc00259b1e0) (0xc001b2c1e0) Stream removed, broadcasting: 1 I0324 14:03:38.059333 6 log.go:172] (0xc00259b1e0) (0xc001b2c280) Stream removed, broadcasting: 3 I0324 14:03:38.059360 6 log.go:172] (0xc00259b1e0) (0xc001b2c3c0) Stream removed, broadcasting: 5 Mar 24 14:03:38.059: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:03:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8172" for this suite. 
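The exec checks above compare `/etc/hosts` across containers with and without `hostNetwork=true`. The underlying rule can be sketched as follows; this is a minimal illustration of the behavior the test verifies, not the kubelet's actual implementation, and the helper names are hypothetical. The marker line is the header the kubelet is expected to prepend to a managed hosts file.

```python
KUBELET_MARKER = "# Kubernetes-managed hosts file."


def etc_hosts_is_kubelet_managed(host_network: bool,
                                 mount_shadows_etc_hosts: bool = False) -> bool:
    """The rule this test exercises: the kubelet manages a pod's /etc/hosts
    only when the pod does NOT share the host network namespace and no
    volume mount shadows /etc/hosts (the 'busybox-3' style mount case)."""
    return not host_network and not mount_shadows_etc_hosts


def looks_kubelet_managed(content: str) -> bool:
    """Heuristic matching what the test's `cat /etc/hosts` output check
    looks for: a kubelet-managed file starts with the marker header."""
    lines = content.splitlines()
    return bool(lines) and lines[0].strip() == KUBELET_MARKER
```

For the `test-host-network-pod` in this log, `host_network=True`, so its `/etc/hosts` is expected to be the node's own file rather than a kubelet-managed one.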
Mar 24 14:04:18.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:04:18.170: INFO: namespace e2e-kubelet-etc-hosts-8172 deletion completed in 40.107077882s • [SLOW TEST:51.303 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:04:18.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3660 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 24 14:04:18.237: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 24 14:04:40.335: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.95 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3660 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 
14:04:40.335: INFO: >>> kubeConfig: /root/.kube/config I0324 14:04:40.362848 6 log.go:172] (0xc002e6c160) (0xc0002488c0) Create stream I0324 14:04:40.362880 6 log.go:172] (0xc002e6c160) (0xc0002488c0) Stream added, broadcasting: 1 I0324 14:04:40.365007 6 log.go:172] (0xc002e6c160) Reply frame received for 1 I0324 14:04:40.365041 6 log.go:172] (0xc002e6c160) (0xc00260c960) Create stream I0324 14:04:40.365052 6 log.go:172] (0xc002e6c160) (0xc00260c960) Stream added, broadcasting: 3 I0324 14:04:40.366194 6 log.go:172] (0xc002e6c160) Reply frame received for 3 I0324 14:04:40.366215 6 log.go:172] (0xc002e6c160) (0xc002210640) Create stream I0324 14:04:40.366221 6 log.go:172] (0xc002e6c160) (0xc002210640) Stream added, broadcasting: 5 I0324 14:04:40.367263 6 log.go:172] (0xc002e6c160) Reply frame received for 5 I0324 14:04:41.441105 6 log.go:172] (0xc002e6c160) Data frame received for 3 I0324 14:04:41.441322 6 log.go:172] (0xc00260c960) (3) Data frame handling I0324 14:04:41.441345 6 log.go:172] (0xc00260c960) (3) Data frame sent I0324 14:04:41.441421 6 log.go:172] (0xc002e6c160) Data frame received for 5 I0324 14:04:41.441450 6 log.go:172] (0xc002210640) (5) Data frame handling I0324 14:04:41.441956 6 log.go:172] (0xc002e6c160) Data frame received for 3 I0324 14:04:41.441989 6 log.go:172] (0xc00260c960) (3) Data frame handling I0324 14:04:41.444160 6 log.go:172] (0xc002e6c160) Data frame received for 1 I0324 14:04:41.444199 6 log.go:172] (0xc0002488c0) (1) Data frame handling I0324 14:04:41.444228 6 log.go:172] (0xc0002488c0) (1) Data frame sent I0324 14:04:41.444256 6 log.go:172] (0xc002e6c160) (0xc0002488c0) Stream removed, broadcasting: 1 I0324 14:04:41.444288 6 log.go:172] (0xc002e6c160) Go away received I0324 14:04:41.444444 6 log.go:172] (0xc002e6c160) (0xc0002488c0) Stream removed, broadcasting: 1 I0324 14:04:41.444468 6 log.go:172] (0xc002e6c160) (0xc00260c960) Stream removed, broadcasting: 3 I0324 14:04:41.444487 6 log.go:172] (0xc002e6c160) (0xc002210640) 
Stream removed, broadcasting: 5 Mar 24 14:04:41.444: INFO: Found all expected endpoints: [netserver-0] Mar 24 14:04:41.448: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.234 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3660 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:04:41.448: INFO: >>> kubeConfig: /root/.kube/config I0324 14:04:41.479520 6 log.go:172] (0xc002e6cd10) (0xc000248dc0) Create stream I0324 14:04:41.479550 6 log.go:172] (0xc002e6cd10) (0xc000248dc0) Stream added, broadcasting: 1 I0324 14:04:41.481748 6 log.go:172] (0xc002e6cd10) Reply frame received for 1 I0324 14:04:41.481787 6 log.go:172] (0xc002e6cd10) (0xc002cb7400) Create stream I0324 14:04:41.481802 6 log.go:172] (0xc002e6cd10) (0xc002cb7400) Stream added, broadcasting: 3 I0324 14:04:41.482750 6 log.go:172] (0xc002e6cd10) Reply frame received for 3 I0324 14:04:41.482793 6 log.go:172] (0xc002e6cd10) (0xc0022108c0) Create stream I0324 14:04:41.482807 6 log.go:172] (0xc002e6cd10) (0xc0022108c0) Stream added, broadcasting: 5 I0324 14:04:41.483654 6 log.go:172] (0xc002e6cd10) Reply frame received for 5 I0324 14:04:42.568936 6 log.go:172] (0xc002e6cd10) Data frame received for 3 I0324 14:04:42.568990 6 log.go:172] (0xc002cb7400) (3) Data frame handling I0324 14:04:42.569099 6 log.go:172] (0xc002cb7400) (3) Data frame sent I0324 14:04:42.569307 6 log.go:172] (0xc002e6cd10) Data frame received for 3 I0324 14:04:42.569340 6 log.go:172] (0xc002cb7400) (3) Data frame handling I0324 14:04:42.569383 6 log.go:172] (0xc002e6cd10) Data frame received for 5 I0324 14:04:42.569433 6 log.go:172] (0xc0022108c0) (5) Data frame handling I0324 14:04:42.571618 6 log.go:172] (0xc002e6cd10) Data frame received for 1 I0324 14:04:42.571639 6 log.go:172] (0xc000248dc0) (1) Data frame handling I0324 14:04:42.571652 6 log.go:172] (0xc000248dc0) (1) Data frame sent I0324 14:04:42.571666 6 log.go:172] 
(0xc002e6cd10) (0xc000248dc0) Stream removed, broadcasting: 1 I0324 14:04:42.571780 6 log.go:172] (0xc002e6cd10) (0xc000248dc0) Stream removed, broadcasting: 1 I0324 14:04:42.571797 6 log.go:172] (0xc002e6cd10) (0xc002cb7400) Stream removed, broadcasting: 3 I0324 14:04:42.571846 6 log.go:172] (0xc002e6cd10) (0xc0022108c0) Stream removed, broadcasting: 5 Mar 24 14:04:42.571: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 I0324 14:04:42.571954 6 log.go:172] (0xc002e6cd10) Go away received Mar 24 14:04:42.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3660" for this suite. Mar 24 14:05:06.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:05:06.670: INFO: namespace pod-network-test-3660 deletion completed in 24.09332846s • [SLOW TEST:48.499 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:05:06.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 24 14:05:06.758: INFO: Waiting up to 5m0s for pod "var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3" in namespace "var-expansion-6435" to be "success or failure" Mar 24 14:05:06.765: INFO: Pod "var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.485472ms Mar 24 14:05:08.770: INFO: Pod "var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012084389s Mar 24 14:05:10.774: INFO: Pod "var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016279878s STEP: Saw pod success Mar 24 14:05:10.774: INFO: Pod "var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3" satisfied condition "success or failure" Mar 24 14:05:10.777: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3 container dapi-container: STEP: delete the pod Mar 24 14:05:10.896: INFO: Waiting for pod var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3 to disappear Mar 24 14:05:10.957: INFO: Pod var-expansion-45ad3a85-2992-403e-a2f6-5e03d25fd3a3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:05:10.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6435" for this suite. 
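The Variable Expansion test above exercises composing env vars from other env vars via Kubernetes `$(VAR_NAME)` references. A minimal sketch of that expansion semantics is below; it is an illustration, not the kubelet's expansion code, and it omits the `$$` escape handling the real implementation performs. Unresolvable references are left verbatim, which matches observed kubelet behavior.

```python
import re

_REF = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")


def expand(value: str, env: dict) -> str:
    """Expand $(NAME) references in an env var value using previously
    defined variables; unknown names are left as-is."""
    return _REF.sub(lambda m: env.get(m.group(1), m.group(0)), value)
```

For example, with `FIRST=foo` already defined, a later var declared as `$(FIRST)-bar` resolves to `foo-bar`, which is the composition the `dapi-container` output is checked against.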
Mar 24 14:05:16.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:05:17.055: INFO: namespace var-expansion-6435 deletion completed in 6.094197092s • [SLOW TEST:10.385 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:05:17.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-e7735d97-7406-4027-bce3-8c0564611f08 STEP: Creating a pod to test consume secrets Mar 24 14:05:17.143: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52" in namespace "projected-9734" to be "success or failure" Mar 24 14:05:17.162: INFO: Pod "pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52": Phase="Pending", Reason="", readiness=false. Elapsed: 18.205555ms Mar 24 14:05:19.172: INFO: Pod "pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02837647s Mar 24 14:05:21.176: INFO: Pod "pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032723401s STEP: Saw pod success Mar 24 14:05:21.176: INFO: Pod "pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52" satisfied condition "success or failure" Mar 24 14:05:21.180: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52 container secret-volume-test: STEP: delete the pod Mar 24 14:05:21.203: INFO: Waiting for pod pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52 to disappear Mar 24 14:05:21.207: INFO: Pod pod-projected-secrets-509095c3-871f-4a83-819e-2edf5bc25b52 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:05:21.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9734" for this suite. 
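Several tests in this run show the same framework pattern: "Waiting up to 5m0s for pod ... to be success or failure", polling the phase until it is terminal. The wait loop can be sketched generically; `get_phase` is a caller-supplied stand-in for the API read, and the injectable `clock`/`sleep` parameters are conveniences for testing, not part of the e2e framework's signature.

```python
import time


def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or the
    timeout elapses, mirroring the 'success or failure' waits above
    (Pending -> Pending -> Succeeded over a few seconds)."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach %s within %ss" % (want, timeout))
```

The elapsed times in the log (7ms, ~2s, ~4s) are exactly the cadence such a 2-second poll produces once image pull and container start complete.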
Mar 24 14:05:27.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:05:27.370: INFO: namespace projected-9734 deletion completed in 6.159050851s • [SLOW TEST:10.314 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:05:27.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-5916 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5916 STEP: Deleting pre-stop pod Mar 24 14:05:40.482: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:05:40.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5916" for this suite. Mar 24 14:06:18.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:06:18.651: INFO: namespace prestop-5916 deletion completed in 38.157394681s • [SLOW TEST:51.280 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:06:18.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:06:18.786: INFO: Create a RollingUpdate DaemonSet Mar 24 14:06:18.790: INFO: Check that daemon 
pods launch on every node of the cluster Mar 24 14:06:18.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:18.814: INFO: Number of nodes with available pods: 0 Mar 24 14:06:18.814: INFO: Node iruya-worker is running more than one daemon pod Mar 24 14:06:19.819: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:19.822: INFO: Number of nodes with available pods: 0 Mar 24 14:06:19.822: INFO: Node iruya-worker is running more than one daemon pod Mar 24 14:06:20.892: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:20.896: INFO: Number of nodes with available pods: 0 Mar 24 14:06:20.896: INFO: Node iruya-worker is running more than one daemon pod Mar 24 14:06:21.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:21.822: INFO: Number of nodes with available pods: 1 Mar 24 14:06:21.822: INFO: Node iruya-worker2 is running more than one daemon pod Mar 24 14:06:22.829: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:22.831: INFO: Number of nodes with available pods: 2 Mar 24 14:06:22.831: INFO: Number of running nodes: 2, number of available pods: 2 Mar 24 14:06:22.831: INFO: Update the DaemonSet to trigger a rollout Mar 24 14:06:22.837: INFO: Updating DaemonSet daemon-set Mar 24 14:06:32.878: INFO: Roll back the DaemonSet before rollout is complete Mar 24 14:06:32.886: INFO: Updating DaemonSet 
daemon-set Mar 24 14:06:32.886: INFO: Make sure DaemonSet rollback is complete Mar 24 14:06:32.893: INFO: Wrong image for pod: daemon-set-fn62p. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 24 14:06:32.893: INFO: Pod daemon-set-fn62p is not available Mar 24 14:06:32.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:33.917: INFO: Wrong image for pod: daemon-set-fn62p. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 24 14:06:33.917: INFO: Pod daemon-set-fn62p is not available Mar 24 14:06:33.921: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:34.917: INFO: Wrong image for pod: daemon-set-fn62p. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Mar 24 14:06:34.917: INFO: Pod daemon-set-fn62p is not available Mar 24 14:06:34.922: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 14:06:35.918: INFO: Pod daemon-set-b64r9 is not available Mar 24 14:06:35.922: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6774, will wait for the garbage collector to delete the pods Mar 24 14:06:35.987: INFO: Deleting DaemonSet.extensions daemon-set took: 6.892933ms Mar 24 14:06:36.287: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.291614ms Mar 24 14:06:41.890: INFO: Number of nodes with available pods: 0 Mar 24 14:06:41.890: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 14:06:41.893: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6774/daemonsets","resourceVersion":"1606986"},"items":null} Mar 24 14:06:41.896: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6774/pods","resourceVersion":"1606986"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:06:41.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6774" for this suite. 
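The rollback verification above repeatedly logs "Wrong image for pod: ... Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent" until every pod matches the rolled-back template. That check can be sketched as a simple comparison of observed pod images against the expected template image; the function name and the pod-name-to-image mapping are illustrative, not the e2e framework's API.

```python
def wrong_image_pods(pod_images: dict, expected_image: str) -> list:
    """Return the names of pods whose container image does not match the
    DaemonSet template image; an empty result means the rollback is
    complete (the condition the wait loop above is driving toward)."""
    return sorted(name for name, image in pod_images.items()
                  if image != expected_image)
```

In the log, `daemon-set-fn62p` (still on `foo:non-existent`) keeps appearing in this list until it is replaced by `daemon-set-b64r9`, at which point the rollback is considered complete without restarting the pods that never ran the bad image.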
Mar 24 14:06:47.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:06:48.004: INFO: namespace daemonsets-6774 deletion completed in 6.095963701s • [SLOW TEST:29.354 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:06:48.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 24 14:06:48.063: INFO: Waiting up to 5m0s for pod "downward-api-cb1fa740-9916-4a65-9438-2eb64049b104" in namespace "downward-api-7295" to be "success or failure" Mar 24 14:06:48.079: INFO: Pod "downward-api-cb1fa740-9916-4a65-9438-2eb64049b104": Phase="Pending", Reason="", readiness=false. Elapsed: 15.76924ms Mar 24 14:06:50.083: INFO: Pod "downward-api-cb1fa740-9916-4a65-9438-2eb64049b104": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019966363s Mar 24 14:06:52.087: INFO: Pod "downward-api-cb1fa740-9916-4a65-9438-2eb64049b104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024110848s STEP: Saw pod success Mar 24 14:06:52.087: INFO: Pod "downward-api-cb1fa740-9916-4a65-9438-2eb64049b104" satisfied condition "success or failure" Mar 24 14:06:52.091: INFO: Trying to get logs from node iruya-worker pod downward-api-cb1fa740-9916-4a65-9438-2eb64049b104 container dapi-container: STEP: delete the pod Mar 24 14:06:52.152: INFO: Waiting for pod downward-api-cb1fa740-9916-4a65-9438-2eb64049b104 to disappear Mar 24 14:06:52.155: INFO: Pod downward-api-cb1fa740-9916-4a65-9438-2eb64049b104 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:06:52.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7295" for this suite. Mar 24 14:06:58.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:06:58.261: INFO: namespace downward-api-7295 deletion completed in 6.103069213s • [SLOW TEST:10.256 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:06:58.262: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:06:58.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 24 14:06:58.436: INFO: stderr: "" Mar 24 14:06:58.436: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:06:58.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3930" for this suite. 
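The Downward API test earlier in this log injects pod metadata into the container environment via `fieldRef`. A minimal pod of that pattern (pod name, image, and command are hypothetical; the field paths are the standard Downward API ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29   # hypothetical image
    command: ["sh", "-c", "env"]            # print the injected variables
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```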
Mar 24 14:07:04.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:07:04.550: INFO: namespace kubectl-3930 deletion completed in 6.11012814s • [SLOW TEST:6.288 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:07:04.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:07:04.594: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 24 14:07:04.643: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 24 14:07:09.648: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 24 14:07:09.648: INFO: Creating deployment "test-rolling-update-deployment" Mar 24 14:07:09.653: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 24 14:07:09.666: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 24 14:07:11.713: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 24 14:07:11.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720655629, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720655629, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720655629, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720655629, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 14:07:13.746: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 24 14:07:13.757: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1358,SelfLink:/apis/apps/v1/namespaces/deployment-1358/deployments/test-rolling-update-deployment,UID:8e627a2d-3d1d-459e-98b3-f4fc4587191a,ResourceVersion:1607150,Generation:1,CreationTimestamp:2020-03-24 14:07:09 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-24 14:07:09 +0000 UTC 2020-03-24 14:07:09 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-24 14:07:12 +0000 UTC 2020-03-24 14:07:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 24 14:07:13.761: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1358,SelfLink:/apis/apps/v1/namespaces/deployment-1358/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:48e854b1-dbed-4547-bce5-ea9afa8c4f76,ResourceVersion:1607139,Generation:1,CreationTimestamp:2020-03-24 14:07:09 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8e627a2d-3d1d-459e-98b3-f4fc4587191a 0xc003076357 0xc003076358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 24 14:07:13.761: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 24 14:07:13.761: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1358,SelfLink:/apis/apps/v1/namespaces/deployment-1358/replicasets/test-rolling-update-controller,UID:3120ffe8-e050-4c11-b139-f29826e93261,ResourceVersion:1607148,Generation:2,CreationTimestamp:2020-03-24 14:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8e627a2d-3d1d-459e-98b3-f4fc4587191a 0xc00307626f 0xc003076280}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 14:07:13.765: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-sggx7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-sggx7,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1358,SelfLink:/api/v1/namespaces/deployment-1358/pods/test-rolling-update-deployment-79f6b9d75c-sggx7,UID:af4f5fa3-ff30-461c-b35a-dfe3991a5169,ResourceVersion:1607138,Generation:0,CreationTimestamp:2020-03-24 14:07:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 48e854b1-dbed-4547-bce5-ea9afa8c4f76 0xc00292c947 0xc00292c948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtpfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtpfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gtpfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00292c9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00292c9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:07:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:07:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:07:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:07:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.100,StartTime:2020-03-24 14:07:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-24 14:07:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e239e2ac843d06b28301a0d029f5451e3b22ab6c7a76064875f05b545b091299}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:07:13.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-1358" for this suite. Mar 24 14:07:19.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:07:19.877: INFO: namespace deployment-1358 deletion completed in 6.108326324s • [SLOW TEST:15.326 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:07:19.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:07:19.924: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:07:21.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3917" for this suite. 
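The Deployment dumped above adopts the existing `test-rolling-update-controller` ReplicaSet through a matching label selector and then rolls its pods to a new template. A sketch of the same shape (the labels and image match the log; the strategy values are the 25%/25% defaults shown in the dump):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod           # matches the adopted ReplicaSet's pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%        # the defaults visible in the status dump
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```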
Mar 24 14:07:27.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:07:27.127: INFO: namespace custom-resource-definition-3917 deletion completed in 6.09359567s • [SLOW TEST:7.250 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:07:27.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:07:32.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-766" for 
this suite. Mar 24 14:07:54.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:07:54.359: INFO: namespace replication-controller-766 deletion completed in 22.093413662s • [SLOW TEST:27.230 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:07:54.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-82451e19-fc1c-457c-b48d-e53eadf5bb34 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:07:54.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6860" for this suite. 
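The Secrets test just above submits a Secret whose data map contains an empty key and expects the API server to reject it. The invalid shape looks roughly like this (the run used a generated name suffix):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test    # hypothetical; the run appended a UUID
data:
  # An empty key is invalid: Secret data keys must be non-empty and
  # match [-._a-zA-Z0-9]+, so this create request fails validation.
  "": dmFsdWU=
```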
Mar 24 14:08:00.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:08:00.536: INFO: namespace secrets-6860 deletion completed in 6.09673012s • [SLOW TEST:6.177 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:08:00.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-aff00486-8eb1-496a-8685-0f24903f92c9 STEP: Creating a pod to test consume configMaps Mar 24 14:08:00.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4" in namespace "configmap-1701" to be "success or failure" Mar 24 14:08:00.648: INFO: Pod "pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.031091ms Mar 24 14:08:02.652: INFO: Pod "pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032304032s Mar 24 14:08:04.657: INFO: Pod "pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036855375s STEP: Saw pod success Mar 24 14:08:04.657: INFO: Pod "pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4" satisfied condition "success or failure" Mar 24 14:08:04.660: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4 container configmap-volume-test: STEP: delete the pod Mar 24 14:08:04.691: INFO: Waiting for pod pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4 to disappear Mar 24 14:08:04.702: INFO: Pod pod-configmaps-c8f0c24b-78a0-4486-99b2-7668adf58ac4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:08:04.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1701" for this suite. Mar 24 14:08:10.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:08:10.795: INFO: namespace configmap-1701 deletion completed in 6.088596365s • [SLOW TEST:10.258 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client
Mar 24 14:08:10.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Mar 24 14:08:10.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6335 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar 24 14:08:13.970: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0324 14:08:13.904044 2832 log.go:172] (0xc000a94160) (0xc00075c140) Create stream\nI0324 14:08:13.904134 2832 log.go:172] (0xc000a94160) (0xc00075c140) Stream added, broadcasting: 1\nI0324 14:08:13.907729 2832 log.go:172] (0xc000a94160) Reply frame received for 1\nI0324 14:08:13.907781 2832 log.go:172] (0xc000a94160) (0xc000436960) Create stream\nI0324 14:08:13.907798 2832 log.go:172] (0xc000a94160) (0xc000436960) Stream added, broadcasting: 3\nI0324 14:08:13.909003 2832 log.go:172] (0xc000a94160) Reply frame received for 3\nI0324 14:08:13.909047 2832 log.go:172] (0xc000a94160) (0xc000436a00) Create stream\nI0324 14:08:13.909058 2832 log.go:172] (0xc000a94160) (0xc000436a00) Stream added, broadcasting: 5\nI0324 14:08:13.910713 2832 log.go:172] (0xc000a94160) Reply frame received for 5\nI0324 14:08:13.910763 2832 log.go:172] (0xc000a94160) (0xc0003f6780) Create stream\nI0324 14:08:13.910775 2832 log.go:172] (0xc000a94160) (0xc0003f6780) Stream added, broadcasting: 7\nI0324 14:08:13.911843 2832 log.go:172] (0xc000a94160) Reply frame received for 7\nI0324 14:08:13.912017 2832 log.go:172] (0xc000436960) (3) Writing data frame\nI0324 14:08:13.912218 2832 log.go:172] (0xc000436960) (3) Writing data frame\nI0324 14:08:13.913279 2832 log.go:172] (0xc000a94160) Data frame received for 5\nI0324 14:08:13.913309 2832 log.go:172] (0xc000436a00) (5) Data frame handling\nI0324 14:08:13.913320 2832 log.go:172] (0xc000436a00) (5) Data frame sent\nI0324 14:08:13.913920 2832 log.go:172] (0xc000a94160) Data frame received for 5\nI0324 14:08:13.913942 2832 log.go:172] (0xc000436a00) (5) Data frame handling\nI0324 14:08:13.913961 2832 log.go:172] (0xc000436a00) (5) Data frame sent\nI0324 14:08:13.947654 2832 log.go:172] (0xc000a94160) Data frame received for 5\nI0324 14:08:13.947674 2832 log.go:172] (0xc000436a00) (5) Data frame handling\nI0324 14:08:13.947704 2832 log.go:172] (0xc000a94160) Data frame received for 7\nI0324 14:08:13.947743 2832 log.go:172] (0xc0003f6780) (7) Data frame handling\nI0324 14:08:13.948232 2832 log.go:172] (0xc000a94160) Data frame received for 1\nI0324 14:08:13.948259 2832 log.go:172] (0xc00075c140) (1) Data frame handling\nI0324 14:08:13.948270 2832 log.go:172] (0xc00075c140) (1) Data frame sent\nI0324 14:08:13.948282 2832 log.go:172] (0xc000a94160) (0xc00075c140) Stream removed, broadcasting: 1\nI0324 14:08:13.948309 2832 log.go:172] (0xc000a94160) (0xc000436960) Stream removed, broadcasting: 3\nI0324 14:08:13.948438 2832 log.go:172] (0xc000a94160) (0xc00075c140) Stream removed, broadcasting: 1\nI0324 14:08:13.948451 2832 log.go:172] (0xc000a94160) (0xc000436960) Stream removed, broadcasting: 3\nI0324 14:08:13.948459 2832 log.go:172] (0xc000a94160) (0xc000436a00) Stream removed, broadcasting: 5\nI0324 14:08:13.948465 2832 log.go:172] (0xc000a94160) (0xc0003f6780) Stream removed, broadcasting: 7\nI0324 14:08:13.948484 2832 log.go:172] (0xc000a94160) Go away received\n"
Mar 24 14:08:13.971: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:08:15.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6335" for this suite.
Mar 24 14:08:23.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:08:24.091: INFO: namespace kubectl-6335 deletion completed in 8.10966738s

• [SLOW TEST:13.296 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:08:24.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-369e35b7-9e6c-4576-88c3-ad6646205c98 in namespace container-probe-7775
Mar 24 14:08:28.175: INFO: Started pod test-webserver-369e35b7-9e6c-4576-88c3-ad6646205c98 in namespace container-probe-7775
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 14:08:28.178: INFO: Initial restart count of pod test-webserver-369e35b7-9e6c-4576-88c3-ad6646205c98 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:12:28.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7775" for this suite.
Mar 24 14:12:34.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:12:34.924: INFO: namespace container-probe-7775 deletion completed in 6.133993644s

• [SLOW TEST:250.831 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:12:34.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0324 14:12:45.007564 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 24 14:12:45.007: INFO: For apiserver_request_total:
	For apiserver_request_latencies_summary:
	For apiserver_init_events_total:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:12:45.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7534" for this suite.
Mar 24 14:12:51.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:12:51.106: INFO: namespace gc-7534 deletion completed in 6.0944473s

• [SLOW TEST:16.182 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:12:51.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-eace37a9-e5fd-4f69-9ecf-d00b98469047
STEP: Creating configMap with name cm-test-opt-upd-5a23b60e-aa41-430b-b908-ea9e8f62e604
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-eace37a9-e5fd-4f69-9ecf-d00b98469047
STEP: Updating configmap cm-test-opt-upd-5a23b60e-aa41-430b-b908-ea9e8f62e604
STEP: Creating configMap with name cm-test-opt-create-3dc19c9c-575a-4ea3-986f-5e6a078e65d4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:14:17.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-839" for this suite.
Mar 24 14:14:39.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:14:39.780: INFO: namespace configmap-839 deletion completed in 22.090739454s

• [SLOW TEST:108.674 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:14:39.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 24 14:14:44.368: INFO: Successfully updated pod "annotationupdate075bb828-cc23-4845-b488-2c03d0622f2d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:14:46.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2603" for this suite.
Mar 24 14:15:08.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:15:08.494: INFO: namespace downward-api-2603 deletion completed in 22.092567118s

• [SLOW TEST:28.714 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:15:08.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-2f5472a1-2aa5-479e-98d8-fbaf89c2679e in namespace container-probe-9213
Mar 24 14:15:12.583: INFO: Started pod busybox-2f5472a1-2aa5-479e-98d8-fbaf89c2679e in namespace container-probe-9213
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 14:15:12.586: INFO: Initial restart count of pod busybox-2f5472a1-2aa5-479e-98d8-fbaf89c2679e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:19:13.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9213" for this suite.
Mar 24 14:19:19.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:19:19.292: INFO: namespace container-probe-9213 deletion completed in 6.102753725s

• [SLOW TEST:250.797 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:19:19.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-909d8ead-fddd-48dc-9cbc-224545b03cec
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-909d8ead-fddd-48dc-9cbc-224545b03cec
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:20:29.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6992" for this suite.
Mar 24 14:20:51.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:20:51.811: INFO: namespace configmap-6992 deletion completed in 22.090915326s

• [SLOW TEST:92.519 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:20:51.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-9e58974c-33b0-4838-a866-7f05898f96a5
STEP: Creating secret with name s-test-opt-upd-704a0a6a-9831-484c-ba66-8301ae0703b7
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9e58974c-33b0-4838-a866-7f05898f96a5
STEP: Updating secret s-test-opt-upd-704a0a6a-9831-484c-ba66-8301ae0703b7
STEP: Creating secret with name s-test-opt-create-1313f822-dfae-421f-8a8a-78c8c9bcef20
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:22:08.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9378" for this suite.
Mar 24 14:22:30.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:22:30.373: INFO: namespace projected-9378 deletion completed in 22.096686451s

• [SLOW TEST:98.562 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:22:30.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 24 14:22:30.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-660'
Mar 24 14:22:32.974: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 24 14:22:32.974: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Mar 24 14:22:33.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-660'
Mar 24 14:22:33.150: INFO: stderr: ""
Mar 24 14:22:33.150: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:22:33.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-660" for this suite.
Mar 24 14:22:39.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:22:39.261: INFO: namespace kubectl-660 deletion completed in 6.107636749s

• [SLOW TEST:8.887 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:22:39.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 24 14:22:47.383: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 24 14:22:47.390: INFO: Pod pod-with-prestop-http-hook still exists
Mar 24 14:22:49.390: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 24 14:22:49.395: INFO: Pod pod-with-prestop-http-hook still exists
Mar 24 14:22:51.390: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 24 14:22:51.395: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:22:51.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9646" for this suite.
Mar 24 14:23:13.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:23:13.527: INFO: namespace container-lifecycle-hook-9646 deletion completed in 22.123194452s

• [SLOW TEST:34.266 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:23:13.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d3d0d0fd-5b1d-49a2-8579-c880d536796d
STEP: Creating a pod to test consume configMaps
Mar 24 14:23:13.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302" in namespace "projected-50" to be "success or failure"
Mar 24 14:23:13.612: INFO: Pod "pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214743ms
Mar 24 14:23:15.615: INFO: Pod "pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007684663s
Mar 24 14:23:17.620: INFO: Pod "pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012440844s
STEP: Saw pod success
Mar 24 14:23:17.620: INFO: Pod "pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302" satisfied condition "success or failure"
Mar 24 14:23:17.623: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302 container projected-configmap-volume-test:
STEP: delete the pod
Mar 24 14:23:17.643: INFO: Waiting for pod pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302 to disappear
Mar 24 14:23:17.681: INFO: Pod pod-projected-configmaps-20424984-a406-4720-ba60-dc0af3f6f302 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:23:17.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-50" for this suite.
Mar 24 14:23:23.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:23:23.776: INFO: namespace projected-50 deletion completed in 6.09115096s

• [SLOW TEST:10.248 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:23:23.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 14:23:23.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e" in namespace "downward-api-4292" to be "success or failure"
Mar 24 14:23:23.893: INFO: Pod "downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.4756ms
Mar 24 14:23:25.896: INFO: Pod "downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018117435s
Mar 24 14:23:27.900: INFO: Pod "downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022147569s
STEP: Saw pod success
Mar 24 14:23:27.900: INFO: Pod "downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e" satisfied condition "success or failure"
Mar 24 14:23:27.903: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e container client-container:
STEP: delete the pod
Mar 24 14:23:27.931: INFO: Waiting for pod downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e to disappear
Mar 24 14:23:27.936: INFO: Pod downwardapi-volume-cbbd96f2-4443-48ef-b7f2-51a0b17de90e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:23:27.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4292" for this suite.
Mar 24 14:23:33.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:23:34.030: INFO: namespace downward-api-4292 deletion completed in 6.090503081s

• [SLOW TEST:10.254 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:23:34.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 24 14:23:34.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3288'
Mar 24 14:23:34.243: INFO: stderr: ""
Mar 24 14:23:34.243: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Mar 24 14:23:39.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3288 -o json'
Mar 24 14:23:39.394: INFO: stderr: ""
Mar 24 14:23:39.394: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-24T14:23:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3288\",\n \"resourceVersion\": \"1609575\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3288/pods/e2e-test-nginx-pod\",\n \"uid\": \"64454522-23f7-419d-b66e-7a54e09345cf\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jv2qb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jv2qb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jv2qb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-24T14:23:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-24T14:23:37Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-24T14:23:37Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-24T14:23:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a2e3a745a907f365ac55593e4bb82a361d5ade6f769cb572bd1dcabedf7896a1\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-24T14:23:36Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.250\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-24T14:23:34Z\"\n }\n}\n"
STEP: replace the image in the pod
Mar 24 14:23:39.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3288'
Mar 24 14:23:39.627: INFO: stderr: ""
Mar 24 14:23:39.628: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Mar 24 14:23:39.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3288'
Mar 24 14:23:42.411: INFO: stderr: ""
Mar 24 14:23:42.411: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:23:42.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3288" for this suite.
Mar 24 14:23:48.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:23:48.527: INFO: namespace kubectl-3288 deletion completed in 6.1108233s

• [SLOW TEST:14.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:23:48.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace
pod-network-test-7675 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 24 14:23:48.596: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 24 14:24:14.695: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.110:8080/dial?request=hostName&protocol=http&host=10.244.2.251&port=8080&tries=1'] Namespace:pod-network-test-7675 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:24:14.695: INFO: >>> kubeConfig: /root/.kube/config I0324 14:24:14.735928 6 log.go:172] (0xc000d4ca50) (0xc0027e5040) Create stream I0324 14:24:14.735966 6 log.go:172] (0xc000d4ca50) (0xc0027e5040) Stream added, broadcasting: 1 I0324 14:24:14.742738 6 log.go:172] (0xc000d4ca50) Reply frame received for 1 I0324 14:24:14.742810 6 log.go:172] (0xc000d4ca50) (0xc0027e50e0) Create stream I0324 14:24:14.742828 6 log.go:172] (0xc000d4ca50) (0xc0027e50e0) Stream added, broadcasting: 3 I0324 14:24:14.743987 6 log.go:172] (0xc000d4ca50) Reply frame received for 3 I0324 14:24:14.744026 6 log.go:172] (0xc000d4ca50) (0xc001090000) Create stream I0324 14:24:14.744039 6 log.go:172] (0xc000d4ca50) (0xc001090000) Stream added, broadcasting: 5 I0324 14:24:14.744965 6 log.go:172] (0xc000d4ca50) Reply frame received for 5 I0324 14:24:14.828734 6 log.go:172] (0xc000d4ca50) Data frame received for 3 I0324 14:24:14.828764 6 log.go:172] (0xc0027e50e0) (3) Data frame handling I0324 14:24:14.828780 6 log.go:172] (0xc0027e50e0) (3) Data frame sent I0324 14:24:14.829641 6 log.go:172] (0xc000d4ca50) Data frame received for 3 I0324 14:24:14.829665 6 log.go:172] (0xc0027e50e0) (3) Data frame handling I0324 14:24:14.829696 6 log.go:172] (0xc000d4ca50) Data frame received for 5 I0324 14:24:14.829717 6 log.go:172] (0xc001090000) (5) Data frame handling I0324 14:24:14.831349 6 log.go:172] (0xc000d4ca50) Data frame received for 1 I0324 14:24:14.831370 6 
log.go:172] (0xc0027e5040) (1) Data frame handling I0324 14:24:14.831383 6 log.go:172] (0xc0027e5040) (1) Data frame sent I0324 14:24:14.831407 6 log.go:172] (0xc000d4ca50) (0xc0027e5040) Stream removed, broadcasting: 1 I0324 14:24:14.831496 6 log.go:172] (0xc000d4ca50) (0xc0027e5040) Stream removed, broadcasting: 1 I0324 14:24:14.831514 6 log.go:172] (0xc000d4ca50) (0xc0027e50e0) Stream removed, broadcasting: 3 I0324 14:24:14.831595 6 log.go:172] (0xc000d4ca50) Go away received I0324 14:24:14.831718 6 log.go:172] (0xc000d4ca50) (0xc001090000) Stream removed, broadcasting: 5 Mar 24 14:24:14.831: INFO: Waiting for endpoints: map[] Mar 24 14:24:14.840: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.110:8080/dial?request=hostName&protocol=http&host=10.244.1.109&port=8080&tries=1'] Namespace:pod-network-test-7675 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 14:24:14.840: INFO: >>> kubeConfig: /root/.kube/config I0324 14:24:14.872529 6 log.go:172] (0xc0026d80b0) (0xc001b2cd20) Create stream I0324 14:24:14.872562 6 log.go:172] (0xc0026d80b0) (0xc001b2cd20) Stream added, broadcasting: 1 I0324 14:24:14.875053 6 log.go:172] (0xc0026d80b0) Reply frame received for 1 I0324 14:24:14.875116 6 log.go:172] (0xc0026d80b0) (0xc001b2ce60) Create stream I0324 14:24:14.875137 6 log.go:172] (0xc0026d80b0) (0xc001b2ce60) Stream added, broadcasting: 3 I0324 14:24:14.876096 6 log.go:172] (0xc0026d80b0) Reply frame received for 3 I0324 14:24:14.876137 6 log.go:172] (0xc0026d80b0) (0xc001b2cf00) Create stream I0324 14:24:14.876150 6 log.go:172] (0xc0026d80b0) (0xc001b2cf00) Stream added, broadcasting: 5 I0324 14:24:14.876948 6 log.go:172] (0xc0026d80b0) Reply frame received for 5 I0324 14:24:14.931090 6 log.go:172] (0xc0026d80b0) Data frame received for 3 I0324 14:24:14.931128 6 log.go:172] (0xc001b2ce60) (3) Data frame handling I0324 14:24:14.931148 6 log.go:172] 
(0xc001b2ce60) (3) Data frame sent I0324 14:24:14.931528 6 log.go:172] (0xc0026d80b0) Data frame received for 3 I0324 14:24:14.931544 6 log.go:172] (0xc001b2ce60) (3) Data frame handling I0324 14:24:14.931868 6 log.go:172] (0xc0026d80b0) Data frame received for 5 I0324 14:24:14.931893 6 log.go:172] (0xc001b2cf00) (5) Data frame handling I0324 14:24:14.933605 6 log.go:172] (0xc0026d80b0) Data frame received for 1 I0324 14:24:14.933627 6 log.go:172] (0xc001b2cd20) (1) Data frame handling I0324 14:24:14.933641 6 log.go:172] (0xc001b2cd20) (1) Data frame sent I0324 14:24:14.933660 6 log.go:172] (0xc0026d80b0) (0xc001b2cd20) Stream removed, broadcasting: 1 I0324 14:24:14.933685 6 log.go:172] (0xc0026d80b0) Go away received I0324 14:24:14.933755 6 log.go:172] (0xc0026d80b0) (0xc001b2cd20) Stream removed, broadcasting: 1 I0324 14:24:14.933775 6 log.go:172] (0xc0026d80b0) (0xc001b2ce60) Stream removed, broadcasting: 3 I0324 14:24:14.933781 6 log.go:172] (0xc0026d80b0) (0xc001b2cf00) Stream removed, broadcasting: 5 Mar 24 14:24:14.933: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:24:14.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7675" for this suite. 
Mar 24 14:24:38.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:24:39.084: INFO: namespace pod-network-test-7675 deletion completed in 24.1464015s
• [SLOW TEST:50.557 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:24:39.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Mar 24 14:24:43.204: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 24 14:24:53.307: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:24:53.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-108" for this suite.
Mar 24 14:24:59.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:24:59.545: INFO: namespace pods-108 deletion completed in 6.230477069s
• [SLOW TEST:20.460 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:24:59.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 24 14:25:04.151: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4798 pod-service-account-b2be8737-3c14-4682-9850-21b4f68ebfa3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 24 14:25:04.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4798 pod-service-account-b2be8737-3c14-4682-9850-21b4f68ebfa3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 24 14:25:04.561: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4798 pod-service-account-b2be8737-3c14-4682-9850-21b4f68ebfa3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:25:04.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4798" for this suite.
Mar 24 14:25:10.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:25:10.898: INFO: namespace svcaccounts-4798 deletion completed in 6.133748759s
• [SLOW TEST:11.353 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:25:10.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-10474fbf-87cf-4361-8724-823722ddd297
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-10474fbf-87cf-4361-8724-823722ddd297
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:26:41.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5355" for this suite.
Mar 24 14:27:03.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:27:03.474: INFO: namespace projected-5355 deletion completed in 22.103312847s
• [SLOW TEST:112.576 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:27:03.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 14:27:03.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf" in namespace "projected-8495" to be "success or failure"
Mar 24 14:27:03.539: INFO: Pod "downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573359ms
Mar 24 14:27:05.543: INFO: Pod "downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007711778s
Mar 24 14:27:07.548: INFO: Pod "downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01236065s
STEP: Saw pod success
Mar 24 14:27:07.548: INFO: Pod "downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf" satisfied condition "success or failure"
Mar 24 14:27:07.551: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf container client-container:
STEP: delete the pod
Mar 24 14:27:07.584: INFO: Waiting for pod downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf to disappear
Mar 24 14:27:07.605: INFO: Pod downwardapi-volume-213e74c7-7cf6-484e-b439-2acefa8145cf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:27:07.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8495" for this suite.
Mar 24 14:27:13.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:27:13.720: INFO: namespace projected-8495 deletion completed in 6.11178588s
• [SLOW TEST:10.246 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:27:13.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 24 14:27:13.781: INFO: Waiting up to 5m0s for pod "pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb" in namespace "emptydir-2505" to be "success or failure"
Mar 24 14:27:13.801: INFO: Pod "pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.087386ms
Mar 24 14:27:15.805: INFO: Pod "pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024294742s
Mar 24 14:27:17.810: INFO: Pod "pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0289981s
STEP: Saw pod success
Mar 24 14:27:17.810: INFO: Pod "pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb" satisfied condition "success or failure"
Mar 24 14:27:17.813: INFO: Trying to get logs from node iruya-worker2 pod pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb container test-container:
STEP: delete the pod
Mar 24 14:27:17.828: INFO: Waiting for pod pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb to disappear
Mar 24 14:27:17.847: INFO: Pod pod-e8806ff9-a6ae-4f6b-bfa0-6373732473fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:27:17.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2505" for this suite.
Mar 24 14:27:23.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:27:23.934: INFO: namespace emptydir-2505 deletion completed in 6.084449546s
• [SLOW TEST:10.214 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:27:23.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6897/configmap-test-9de1f6c0-53ac-450e-9cc7-f54bd2c90395
STEP: Creating a pod to test consume configMaps
Mar 24 14:27:24.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b" in namespace "configmap-6897" to be "success or failure"
Mar 24 14:27:24.239: INFO: Pod "pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.634206ms
Mar 24 14:27:26.350: INFO: Pod "pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115414299s
Mar 24 14:27:28.354: INFO: Pod "pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119635372s
STEP: Saw pod success
Mar 24 14:27:28.354: INFO: Pod "pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b" satisfied condition "success or failure"
Mar 24 14:27:28.357: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b container env-test:
STEP: delete the pod
Mar 24 14:27:28.378: INFO: Waiting for pod pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b to disappear
Mar 24 14:27:28.382: INFO: Pod pod-configmaps-d0e01b36-946a-4885-9da7-9d149e4a952b no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:27:28.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6897" for this suite.
Mar 24 14:27:34.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:27:34.502: INFO: namespace configmap-6897 deletion completed in 6.116384348s
• [SLOW TEST:10.567 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:27:34.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9ad899c6-6432-489c-8094-f75c2182f06f
STEP: Creating a pod to test consume configMaps
Mar 24 14:27:34.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604" in namespace "configmap-9656" to be "success or failure"
Mar 24 14:27:34.641: INFO: Pod "pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604": Phase="Pending", Reason="", readiness=false. Elapsed: 16.024407ms
Mar 24 14:27:36.645: INFO: Pod "pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019859226s
Mar 24 14:27:38.650: INFO: Pod "pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024522903s
STEP: Saw pod success
Mar 24 14:27:38.650: INFO: Pod "pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604" satisfied condition "success or failure"
Mar 24 14:27:38.653: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604 container configmap-volume-test:
STEP: delete the pod
Mar 24 14:27:38.672: INFO: Waiting for pod pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604 to disappear
Mar 24 14:27:38.676: INFO: Pod pod-configmaps-e3b8c2c8-75bc-4dbc-a0bd-6fb9d7633604 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:27:38.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9656" for this suite.
Mar 24 14:27:44.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:27:44.783: INFO: namespace configmap-9656 deletion completed in 6.103973542s
• [SLOW TEST:10.281 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:27:44.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-59ea30a4-30c3-44d7-8bfa-dc280812b220
STEP: Creating a pod to test consume secrets
Mar 24 14:27:44.872: INFO: Waiting up to 5m0s for pod "pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa" in namespace "secrets-130" to be "success or failure"
Mar 24 14:27:44.876: INFO: Pod "pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.468774ms
Mar 24 14:27:46.879: INFO: Pod "pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007107675s
Mar 24 14:27:48.883: INFO: Pod "pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01119355s
STEP: Saw pod success
Mar 24 14:27:48.884: INFO: Pod "pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa" satisfied condition "success or failure"
Mar 24 14:27:48.887: INFO: Trying to get logs from node iruya-worker pod pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa container secret-volume-test:
STEP: delete the pod
Mar 24 14:27:48.921: INFO: Waiting for pod pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa to disappear
Mar 24 14:27:48.935: INFO: Pod pod-secrets-778e0f4a-09e5-40ef-bfee-23564e7089aa no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:27:48.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-130" for this suite.
Mar 24 14:27:54.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:27:55.030: INFO: namespace secrets-130 deletion completed in 6.091103559s
• [SLOW TEST:10.246 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:27:55.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 24 14:27:55.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7340,SelfLink:/api/v1/namespaces/watch-7340/configmaps/e2e-watch-test-resource-version,UID:40d773e9-c10c-4100-a8dd-97399aa6484d,ResourceVersion:1610375,Generation:0,CreationTimestamp:2020-03-24 14:27:55 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 24 14:27:55.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7340,SelfLink:/api/v1/namespaces/watch-7340/configmaps/e2e-watch-test-resource-version,UID:40d773e9-c10c-4100-a8dd-97399aa6484d,ResourceVersion:1610376,Generation:0,CreationTimestamp:2020-03-24 14:27:55 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:27:55.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7340" for this suite.
Mar 24 14:28:01.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:28:01.268: INFO: namespace watch-7340 deletion completed in 6.079794867s • [SLOW TEST:6.238 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:28:01.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 24 14:28:01.320: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Mar 24 14:28:01.816: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 24 14:28:04.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720656881, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720656881, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720656881, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720656881, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 14:28:06.735: INFO: Waited 627.282133ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:28:07.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4738" for this suite. 
Mar 24 14:28:13.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:28:13.348: INFO: namespace aggregator-4738 deletion completed in 6.180451993s • [SLOW TEST:12.079 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:28:13.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 24 14:28:17.432: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:28:17.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6297" for this suite. Mar 24 14:28:23.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:28:23.582: INFO: namespace container-runtime-6297 deletion completed in 6.100431917s • [SLOW TEST:10.234 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:28:23.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5jmfm in namespace proxy-4294 I0324 14:28:23.708771 6 runners.go:180] Created replication 
controller with name: proxy-service-5jmfm, namespace: proxy-4294, replica count: 1 I0324 14:28:24.759299 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0324 14:28:25.759545 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0324 14:28:26.759859 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:27.760129 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:28.760337 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:29.760627 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:30.760858 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:31.761086 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:32.761450 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:33.761712 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:34.761979 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 1 runningButNotReady I0324 14:28:35.762241 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0324 14:28:36.762469 6 runners.go:180] proxy-service-5jmfm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 24 14:28:36.766: INFO: setup took 13.13033071s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 24 14:28:36.774: INFO: (0) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 7.280752ms) Mar 24 14:28:36.774: INFO: (0) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 7.51639ms) Mar 24 14:28:36.774: INFO: (0) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 7.469688ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 10.655225ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 10.769562ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 10.867505ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 10.983572ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 11.073234ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 10.879615ms) Mar 24 14:28:36.777: INFO: (0) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 11.062577ms) Mar 24 14:28:36.778: INFO: (0) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 11.091488ms) Mar 24 14:28:36.780: INFO: (0) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 13.79696ms) Mar 24 14:28:36.780: INFO: (0) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 14.015328ms) Mar 24 14:28:36.781: INFO: (0) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 14.116619ms) Mar 24 14:28:36.783: INFO: (0) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test (200; 4.850106ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.868109ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... (200; 4.905338ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 5.004614ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 5.051151ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 5.125553ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 5.071975ms) Mar 24 14:28:36.789: INFO: (1) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 5.091508ms) Mar 24 14:28:36.791: INFO: (1) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 6.446626ms) Mar 24 14:28:36.791: INFO: (1) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 6.76345ms) Mar 24 14:28:36.791: INFO: (1) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 6.700917ms) Mar 24 14:28:36.791: INFO: (1) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 6.858254ms) Mar 24 14:28:36.791: INFO: (1) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 6.876037ms) Mar 24 14:28:36.794: INFO: (2) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 3.120791ms) Mar 24 14:28:36.795: INFO: (2) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 3.691855ms) Mar 24 14:28:36.795: INFO: (2) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.116219ms) Mar 24 14:28:36.795: INFO: (2) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test<... (200; 4.13129ms) Mar 24 14:28:36.795: INFO: (2) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... 
(200; 4.256217ms) Mar 24 14:28:36.795: INFO: (2) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.269469ms) Mar 24 14:28:36.795: INFO: (2) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 4.264345ms) Mar 24 14:28:36.796: INFO: (2) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 4.404978ms) Mar 24 14:28:36.796: INFO: (2) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 4.919682ms) Mar 24 14:28:36.796: INFO: (2) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 4.87323ms) Mar 24 14:28:36.796: INFO: (2) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 4.93789ms) Mar 24 14:28:36.796: INFO: (2) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.911668ms) Mar 24 14:28:36.797: INFO: (2) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 5.300784ms) Mar 24 14:28:36.804: INFO: (3) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 7.890055ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 8.254047ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 8.018647ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 8.275813ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 8.099508ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 8.337428ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 8.183097ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 8.14992ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 8.230834ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 8.283419ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 8.444135ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 8.245785ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 8.471091ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... 
(200; 8.290592ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 8.195967ms) Mar 24 14:28:36.805: INFO: (3) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test (200; 5.023135ms) Mar 24 14:28:36.810: INFO: (4) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 5.075619ms) Mar 24 14:28:36.810: INFO: (4) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 5.059348ms) Mar 24 14:28:36.810: INFO: (4) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 5.066814ms) Mar 24 14:28:36.810: INFO: (4) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 5.066508ms) Mar 24 14:28:36.810: INFO: (4) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 5.166353ms) Mar 24 14:28:36.811: INFO: (4) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 5.43409ms) Mar 24 14:28:36.811: INFO: (4) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 5.350356ms) Mar 24 14:28:36.811: INFO: (4) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 5.410417ms) Mar 24 14:28:36.811: INFO: (4) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 5.468785ms) Mar 24 14:28:36.811: INFO: (4) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 5.431659ms) Mar 24 14:28:36.814: INFO: (5) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.421077ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 5.01414ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 5.199743ms) 
Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 5.30233ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 5.316603ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 5.297812ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test (200; 5.412154ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 5.461623ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 5.527783ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 5.416197ms) Mar 24 14:28:36.816: INFO: (5) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 5.624948ms) Mar 24 14:28:36.818: INFO: (5) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 6.807755ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... 
(200; 3.050038ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.2606ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 3.570295ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.556631ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 3.523317ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.711783ms) Mar 24 14:28:36.821: INFO: (6) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.610264ms) Mar 24 14:28:36.822: INFO: (6) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 4.153797ms) Mar 24 14:28:36.822: INFO: (6) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.309815ms) Mar 24 14:28:36.822: INFO: (6) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test<... (200; 6.61131ms) Mar 24 14:28:36.830: INFO: (7) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 6.871523ms) Mar 24 14:28:36.830: INFO: (7) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 6.873924ms) Mar 24 14:28:36.830: INFO: (7) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 6.923987ms) Mar 24 14:28:36.830: INFO: (7) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 7.014597ms) Mar 24 14:28:36.830: INFO: (7) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test<... 
(200; 3.982062ms) Mar 24 14:28:36.834: INFO: (8) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 4.247342ms) Mar 24 14:28:36.834: INFO: (8) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.346547ms) Mar 24 14:28:36.834: INFO: (8) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.327078ms) Mar 24 14:28:36.834: INFO: (8) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.378748ms) Mar 24 14:28:36.835: INFO: (8) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 4.509752ms) Mar 24 14:28:36.835: INFO: (8) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test<... (200; 3.598647ms) Mar 24 14:28:36.840: INFO: (9) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 3.920799ms) Mar 24 14:28:36.840: INFO: (9) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.859717ms) Mar 24 14:28:36.840: INFO: (9) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 4.130481ms) Mar 24 14:28:36.840: INFO: (9) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.152201ms) Mar 24 14:28:36.840: INFO: (9) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.103404ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 4.738364ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 4.664612ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.768944ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 4.749317ms) 
Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.807219ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 4.782745ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 4.75647ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.727785ms) Mar 24 14:28:36.841: INFO: (9) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... (200; 3.631234ms) Mar 24 14:28:36.845: INFO: (10) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 3.631385ms) Mar 24 14:28:36.845: INFO: (10) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.625537ms) Mar 24 14:28:36.845: INFO: (10) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test (200; 3.962635ms) Mar 24 14:28:36.845: INFO: (10) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.189176ms) Mar 24 14:28:36.846: INFO: (10) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 5.06135ms) Mar 24 14:28:36.846: INFO: (10) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 5.036774ms) Mar 24 14:28:36.847: INFO: (10) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 5.143754ms) Mar 24 14:28:36.847: INFO: (10) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 5.242598ms) Mar 24 14:28:36.847: INFO: (10) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 5.370128ms) Mar 24 14:28:36.847: INFO: (10) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 5.401467ms) 
Mar 24 14:28:36.849: INFO: (11) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 2.467323ms) Mar 24 14:28:36.851: INFO: (11) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 3.912376ms) Mar 24 14:28:36.851: INFO: (11) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 3.924394ms) Mar 24 14:28:36.851: INFO: (11) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.01575ms) Mar 24 14:28:36.852: INFO: (11) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 4.971239ms) Mar 24 14:28:36.852: INFO: (11) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... (200; 8.779443ms) Mar 24 14:28:36.856: INFO: (11) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 8.748413ms) Mar 24 14:28:36.858: INFO: (12) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 1.952291ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 3.63974ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 4.209744ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.045269ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 3.887046ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 4.118083ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... 
(200; 3.979414ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.966008ms) Mar 24 14:28:36.860: INFO: (12) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.471072ms) Mar 24 14:28:36.861: INFO: (12) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 5.106269ms) Mar 24 14:28:36.861: INFO: (12) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 5.288242ms) Mar 24 14:28:36.861: INFO: (12) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 5.532956ms) Mar 24 14:28:36.861: INFO: (12) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 5.526144ms) Mar 24 14:28:36.861: INFO: (12) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 5.470545ms) Mar 24 14:28:36.864: INFO: (13) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 2.396564ms) Mar 24 14:28:36.864: INFO: (13) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 2.433186ms) Mar 24 14:28:36.864: INFO: (13) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 2.413359ms) Mar 24 14:28:36.864: INFO: (13) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 2.641476ms) Mar 24 14:28:36.864: INFO: (13) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 2.782791ms) Mar 24 14:28:36.865: INFO: (13) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 3.153117ms) Mar 24 14:28:36.865: INFO: (13) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 3.209766ms) Mar 24 14:28:36.865: INFO: (13) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 3.365157ms) Mar 24 14:28:36.865: INFO: (13) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... (200; 4.380071ms) Mar 24 14:28:36.870: INFO: (14) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... (200; 4.421717ms) Mar 24 14:28:36.870: INFO: (14) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 4.386553ms) Mar 24 14:28:36.870: INFO: (14) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.422801ms) Mar 24 14:28:36.871: INFO: (14) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 4.625452ms) Mar 24 14:28:36.871: INFO: (14) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.614551ms) Mar 24 14:28:36.871: INFO: (14) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.662314ms) Mar 24 14:28:36.871: INFO: (14) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 4.682733ms) Mar 24 14:28:36.871: INFO: (14) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test<... 
(200; 3.347311ms) Mar 24 14:28:36.874: INFO: (15) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 3.369908ms) Mar 24 14:28:36.874: INFO: (15) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 3.431452ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.794634ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.996956ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 3.960339ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test (200; 3.994975ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 4.227978ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 4.202019ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 4.26202ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.254608ms) Mar 24 14:28:36.875: INFO: (15) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 4.496957ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 3.401486ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 3.583453ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.68183ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 3.676382ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.924045ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.151465ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... (200; 4.179551ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname2/proxy/: bar (200; 4.085208ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.127798ms) Mar 24 14:28:36.879: INFO: (16) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... (200; 3.242398ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.238379ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 3.557448ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 3.511894ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 3.551981ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 3.538767ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:460/proxy/: tls baz (200; 3.639737ms) Mar 24 14:28:36.883: INFO: (17) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 3.639418ms) Mar 24 14:28:36.884: INFO: (17) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.184195ms) Mar 24 14:28:36.884: INFO: (17) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: test<... (200; 3.986954ms) Mar 24 14:28:36.888: INFO: (18) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 4.052716ms) Mar 24 14:28:36.888: INFO: (18) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:462/proxy/: tls qux (200; 4.03049ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.069481ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 4.055102ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:1080/proxy/: ... 
(200; 4.0955ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 4.140601ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/services/http:proxy-service-5jmfm:portname1/proxy/: foo (200; 4.385879ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 4.439949ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname2/proxy/: bar (200; 4.465698ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/services/proxy-service-5jmfm:portname1/proxy/: foo (200; 4.464231ms) Mar 24 14:28:36.889: INFO: (18) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname1/proxy/: tls baz (200; 4.475254ms) Mar 24 14:28:36.892: INFO: (19) /api/v1/namespaces/proxy-4294/pods/https:proxy-service-5jmfm-4zlqp:443/proxy/: ... (200; 5.153824ms) Mar 24 14:28:36.894: INFO: (19) /api/v1/namespaces/proxy-4294/pods/http:proxy-service-5jmfm-4zlqp:162/proxy/: bar (200; 5.136724ms) Mar 24 14:28:36.894: INFO: (19) /api/v1/namespaces/proxy-4294/services/https:proxy-service-5jmfm:tlsportname2/proxy/: tls qux (200; 5.178029ms) Mar 24 14:28:36.894: INFO: (19) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:1080/proxy/: test<... 
(200; 5.111946ms) Mar 24 14:28:36.894: INFO: (19) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp/proxy/: test (200; 5.216704ms) Mar 24 14:28:36.894: INFO: (19) /api/v1/namespaces/proxy-4294/pods/proxy-service-5jmfm-4zlqp:160/proxy/: foo (200; 5.239469ms) STEP: deleting ReplicationController proxy-service-5jmfm in namespace proxy-4294, will wait for the garbage collector to delete the pods Mar 24 14:28:36.953: INFO: Deleting ReplicationController proxy-service-5jmfm took: 6.8533ms Mar 24 14:28:37.253: INFO: Terminating ReplicationController proxy-service-5jmfm pods took: 300.235589ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:28:41.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4294" for this suite. Mar 24 14:28:47.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:28:48.062: INFO: namespace proxy-4294 deletion completed in 6.104629784s • [SLOW TEST:24.480 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:28:48.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 24 14:28:48.183: INFO: Waiting up to 5m0s for pod "pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5" in namespace "emptydir-6655" to be "success or failure" Mar 24 14:28:48.186: INFO: Pod "pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713969ms Mar 24 14:28:50.190: INFO: Pod "pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006714015s Mar 24 14:28:52.194: INFO: Pod "pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010324437s STEP: Saw pod success Mar 24 14:28:52.194: INFO: Pod "pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5" satisfied condition "success or failure" Mar 24 14:28:52.196: INFO: Trying to get logs from node iruya-worker pod pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5 container test-container: STEP: delete the pod Mar 24 14:28:52.211: INFO: Waiting for pod pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5 to disappear Mar 24 14:28:52.216: INFO: Pod pod-ba92b466-ee4d-4b77-8461-7d8343d37fc5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:28:52.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6655" for this suite. 
Mar 24 14:28:58.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:28:58.325: INFO: namespace emptydir-6655 deletion completed in 6.106459114s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:28:58.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:29:02.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6356" for this suite. 
Mar 24 14:29:08.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:29:08.505: INFO: namespace kubelet-test-6356 deletion completed in 6.11202911s • [SLOW TEST:10.179 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:29:08.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 24 14:29:08.620: INFO: Waiting up to 5m0s for pod "downward-api-4a138da7-682e-416a-ab25-45620d881cfb" in namespace "downward-api-5779" to be "success or failure" Mar 24 14:29:08.623: INFO: Pod "downward-api-4a138da7-682e-416a-ab25-45620d881cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946547ms Mar 24 14:29:10.644: INFO: Pod "downward-api-4a138da7-682e-416a-ab25-45620d881cfb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024145782s Mar 24 14:29:12.649: INFO: Pod "downward-api-4a138da7-682e-416a-ab25-45620d881cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028731888s STEP: Saw pod success Mar 24 14:29:12.649: INFO: Pod "downward-api-4a138da7-682e-416a-ab25-45620d881cfb" satisfied condition "success or failure" Mar 24 14:29:12.652: INFO: Trying to get logs from node iruya-worker pod downward-api-4a138da7-682e-416a-ab25-45620d881cfb container dapi-container: STEP: delete the pod Mar 24 14:29:12.676: INFO: Waiting for pod downward-api-4a138da7-682e-416a-ab25-45620d881cfb to disappear Mar 24 14:29:12.680: INFO: Pod downward-api-4a138da7-682e-416a-ab25-45620d881cfb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:29:12.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5779" for this suite. Mar 24 14:29:18.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:29:18.807: INFO: namespace downward-api-5779 deletion completed in 6.120017233s • [SLOW TEST:10.302 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:29:18.807: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 24 14:29:24.003: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:29:25.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3982" for this suite. Mar 24 14:29:47.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:29:47.167: INFO: namespace replicaset-3982 deletion completed in 22.094752633s • [SLOW TEST:28.360 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Mar 24 14:29:47.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:30:05.236: INFO: Container started at 2020-03-24 14:29:49 +0000 UTC, pod became ready at 2020-03-24 14:30:04 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:30:05.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5077" for this suite. Mar 24 14:30:27.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:30:27.344: INFO: namespace container-probe-5077 deletion completed in 22.104967885s • [SLOW TEST:40.177 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] 
[k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:30:27.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 24 14:30:30.514: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:30:30.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8375" for this suite. 
Mar 24 14:30:36.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:30:36.657: INFO: namespace container-runtime-8375 deletion completed in 6.111820977s • [SLOW TEST:9.311 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:30:36.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 24 14:30:37.338: INFO: Pod name wrapped-volume-race-34bd39d4-ff94-4141-90f3-485aee5b988b: Found 0 pods out of 5 Mar 24 14:30:42.346: INFO: Pod name 
wrapped-volume-race-34bd39d4-ff94-4141-90f3-485aee5b988b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-34bd39d4-ff94-4141-90f3-485aee5b988b in namespace emptydir-wrapper-3593, will wait for the garbage collector to delete the pods Mar 24 14:30:56.432: INFO: Deleting ReplicationController wrapped-volume-race-34bd39d4-ff94-4141-90f3-485aee5b988b took: 11.150298ms Mar 24 14:30:56.732: INFO: Terminating ReplicationController wrapped-volume-race-34bd39d4-ff94-4141-90f3-485aee5b988b pods took: 300.335567ms STEP: Creating RC which spawns configmap-volume pods Mar 24 14:31:34.503: INFO: Pod name wrapped-volume-race-fee63d8b-8c7b-47df-937f-93e32cc6f7e5: Found 0 pods out of 5 Mar 24 14:31:39.509: INFO: Pod name wrapped-volume-race-fee63d8b-8c7b-47df-937f-93e32cc6f7e5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fee63d8b-8c7b-47df-937f-93e32cc6f7e5 in namespace emptydir-wrapper-3593, will wait for the garbage collector to delete the pods Mar 24 14:31:53.596: INFO: Deleting ReplicationController wrapped-volume-race-fee63d8b-8c7b-47df-937f-93e32cc6f7e5 took: 7.08115ms Mar 24 14:31:53.896: INFO: Terminating ReplicationController wrapped-volume-race-fee63d8b-8c7b-47df-937f-93e32cc6f7e5 pods took: 300.277002ms STEP: Creating RC which spawns configmap-volume pods Mar 24 14:32:32.324: INFO: Pod name wrapped-volume-race-e07dc0cc-50de-4c84-8f0a-971624bfc922: Found 0 pods out of 5 Mar 24 14:32:37.338: INFO: Pod name wrapped-volume-race-e07dc0cc-50de-4c84-8f0a-971624bfc922: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e07dc0cc-50de-4c84-8f0a-971624bfc922 in namespace emptydir-wrapper-3593, will wait for the garbage collector to delete the pods Mar 24 14:32:51.426: INFO: Deleting ReplicationController wrapped-volume-race-e07dc0cc-50de-4c84-8f0a-971624bfc922 took: 6.516783ms Mar 24 
14:32:51.726: INFO: Terminating ReplicationController wrapped-volume-race-e07dc0cc-50de-4c84-8f0a-971624bfc922 pods took: 300.358833ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:33:32.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3593" for this suite. Mar 24 14:33:40.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:33:40.993: INFO: namespace emptydir-wrapper-3593 deletion completed in 8.098460261s • [SLOW TEST:184.336 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:33:40.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4992 STEP: 
waiting up to 3m0s for service multi-endpoint-test in namespace services-4992 to expose endpoints map[] Mar 24 14:33:41.101: INFO: Get endpoints failed (18.449668ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 24 14:33:42.105: INFO: successfully validated that service multi-endpoint-test in namespace services-4992 exposes endpoints map[] (1.022751053s elapsed) STEP: Creating pod pod1 in namespace services-4992 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4992 to expose endpoints map[pod1:[100]] Mar 24 14:33:46.435: INFO: successfully validated that service multi-endpoint-test in namespace services-4992 exposes endpoints map[pod1:[100]] (4.322601838s elapsed) STEP: Creating pod pod2 in namespace services-4992 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4992 to expose endpoints map[pod1:[100] pod2:[101]] Mar 24 14:33:49.570: INFO: successfully validated that service multi-endpoint-test in namespace services-4992 exposes endpoints map[pod1:[100] pod2:[101]] (3.131963072s elapsed) STEP: Deleting pod pod1 in namespace services-4992 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4992 to expose endpoints map[pod2:[101]] Mar 24 14:33:50.607: INFO: successfully validated that service multi-endpoint-test in namespace services-4992 exposes endpoints map[pod2:[101]] (1.032346404s elapsed) STEP: Deleting pod pod2 in namespace services-4992 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4992 to expose endpoints map[] Mar 24 14:33:51.623: INFO: successfully validated that service multi-endpoint-test in namespace services-4992 exposes endpoints map[] (1.010072446s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:33:51.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-4992" for this suite. Mar 24 14:34:13.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:34:13.748: INFO: namespace services-4992 deletion completed in 22.098881004s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.755 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:34:13.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-79c9eaf6-145d-4ddc-883b-7d7bc56442b8 STEP: Creating a pod to test consume secrets Mar 24 14:34:13.843: INFO: Waiting up to 5m0s for pod "pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09" in namespace "secrets-1738" to be "success or failure" Mar 24 14:34:13.858: INFO: Pod "pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.941023ms Mar 24 14:34:15.862: INFO: Pod "pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019650096s Mar 24 14:34:17.866: INFO: Pod "pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023396302s STEP: Saw pod success Mar 24 14:34:17.866: INFO: Pod "pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09" satisfied condition "success or failure" Mar 24 14:34:17.868: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09 container secret-volume-test: STEP: delete the pod Mar 24 14:34:17.899: INFO: Waiting for pod pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09 to disappear Mar 24 14:34:17.914: INFO: Pod pod-secrets-6fdabf54-48a0-4f74-bd0d-baa388358f09 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:34:17.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1738" for this suite. 
Mar 24 14:34:23.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:34:24.009: INFO: namespace secrets-1738 deletion completed in 6.091381426s • [SLOW TEST:10.261 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:34:24.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-7821cc0f-5219-4939-b9ab-bbc2d0b638ab STEP: Creating a pod to test consume configMaps Mar 24 14:34:24.075: INFO: Waiting up to 5m0s for pod "pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0" in namespace "configmap-8758" to be "success or failure" Mar 24 14:34:24.088: INFO: Pod "pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.649518ms Mar 24 14:34:26.092: INFO: Pod "pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016682594s Mar 24 14:34:28.096: INFO: Pod "pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020723338s STEP: Saw pod success Mar 24 14:34:28.096: INFO: Pod "pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0" satisfied condition "success or failure" Mar 24 14:34:28.099: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0 container configmap-volume-test: STEP: delete the pod Mar 24 14:34:28.131: INFO: Waiting for pod pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0 to disappear Mar 24 14:34:28.142: INFO: Pod pod-configmaps-968658a8-1b2d-46cd-bec9-b5338afc77f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:34:28.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8758" for this suite. 
Mar 24 14:34:34.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:34:34.242: INFO: namespace configmap-8758 deletion completed in 6.096238465s • [SLOW TEST:10.232 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:34:34.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 24 14:34:34.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9334' Mar 24 14:34:37.135: INFO: stderr: "" Mar 24 14:34:37.135: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Mar 24 14:34:38.141: INFO: Selector matched 1 pods for map[app:redis] Mar 24 14:34:38.141: INFO: Found 0 / 1 Mar 24 14:34:39.138: INFO: Selector matched 1 pods for map[app:redis] Mar 24 14:34:39.138: INFO: Found 0 / 1 Mar 24 14:34:40.139: INFO: Selector matched 1 pods for map[app:redis] Mar 24 14:34:40.139: INFO: Found 0 / 1 Mar 24 14:34:41.140: INFO: Selector matched 1 pods for map[app:redis] Mar 24 14:34:41.140: INFO: Found 1 / 1 Mar 24 14:34:41.140: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 24 14:34:41.143: INFO: Selector matched 1 pods for map[app:redis] Mar 24 14:34:41.143: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 24 14:34:41.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-98rzq redis-master --namespace=kubectl-9334' Mar 24 14:34:41.247: INFO: stderr: "" Mar 24 14:34:41.247: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Mar 14:34:39.845 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Mar 14:34:39.846 # Server started, Redis version 3.2.12\n1:M 24 Mar 14:34:39.846 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Mar 14:34:39.846 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 24 14:34:41.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-98rzq redis-master --namespace=kubectl-9334 --tail=1' Mar 24 14:34:41.350: INFO: stderr: "" Mar 24 14:34:41.350: INFO: stdout: "1:M 24 Mar 14:34:39.846 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 24 14:34:41.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-98rzq redis-master --namespace=kubectl-9334 --limit-bytes=1' Mar 24 14:34:41.451: INFO: stderr: "" Mar 24 14:34:41.451: INFO: stdout: " " STEP: exposing timestamps Mar 24 14:34:41.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-98rzq redis-master --namespace=kubectl-9334 --tail=1 --timestamps' Mar 24 14:34:41.553: INFO: stderr: "" Mar 24 14:34:41.553: INFO: stdout: "2020-03-24T14:34:39.846423038Z 1:M 24 Mar 14:34:39.846 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 24 14:34:44.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-98rzq redis-master --namespace=kubectl-9334 --since=1s' Mar 24 14:34:44.154: INFO: stderr: "" Mar 24 14:34:44.154: INFO: stdout: "" Mar 24 14:34:44.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-98rzq redis-master --namespace=kubectl-9334 --since=24h' Mar 24 14:34:44.256: INFO: stderr: "" Mar 24 14:34:44.256: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Mar 14:34:39.845 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Mar 14:34:39.846 # Server started, Redis version 3.2.12\n1:M 24 Mar 14:34:39.846 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Mar 14:34:39.846 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 24 14:34:44.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9334' Mar 24 14:34:44.366: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 24 14:34:44.366: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 24 14:34:44.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9334' Mar 24 14:34:44.463: INFO: stderr: "No resources found.\n" Mar 24 14:34:44.463: INFO: stdout: "" Mar 24 14:34:44.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9334 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 14:34:44.545: INFO: stderr: "" Mar 24 14:34:44.545: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:34:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9334" for this suite. 
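The kubectl-logs test above exercises the `--tail`, `--limit-bytes`, `--timestamps`, and `--since` flags. Since replaying kubectl needs a live cluster, here is a minimal local sketch of the flag semantics, mapped onto coreutils over an invented sample file:

```shell
# Local sketch of the log-filtering behaviour exercised above. The file
# contents are invented; only the filtering semantics mirror kubectl.
log=$(mktemp)
printf 'line one\nline two\nline three\n' > "$log"

# kubectl logs --tail=1  ~  keep only the last line of the stream
tail -n 1 "$log"        # prints "line three"

# kubectl logs --limit-bytes=1  ~  truncate the stream after one byte
head -c 1 "$log"        # prints "l" with no trailing newline
echo

rm -f "$log"
```

This matches the test's observations: `--tail=1` returned only the final "ready to accept connections" line, and `--limit-bytes=1` returned a single byte.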
Mar 24 14:35:06.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:35:06.680: INFO: namespace kubectl-9334 deletion completed in 22.131277275s • [SLOW TEST:32.438 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:35:06.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 24 14:35:11.287: INFO: Successfully updated pod "labelsupdatecbf07645-546b-460e-9c89-e7a2addc5255" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:35:13.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "projected-1180" for this suite. Mar 24 14:35:35.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:35:35.424: INFO: namespace projected-1180 deletion completed in 22.101078387s • [SLOW TEST:28.743 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:35:35.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-d4ba45f3-d4f7-4f5e-93d0-f92951003acd STEP: Creating secret with name secret-projected-all-test-volume-37f6361a-38ff-4591-bcf6-a31599c1b98d STEP: Creating a pod to test Check all projections for projected volume plugin Mar 24 14:35:35.509: INFO: Waiting up to 5m0s for pod "projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de" in namespace "projected-9205" to be "success or failure" Mar 24 14:35:35.513: INFO: Pod 
"projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341545ms Mar 24 14:35:37.517: INFO: Pod "projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789487s Mar 24 14:35:39.522: INFO: Pod "projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012272217s STEP: Saw pod success Mar 24 14:35:39.522: INFO: Pod "projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de" satisfied condition "success or failure" Mar 24 14:35:39.525: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de container projected-all-volume-test: STEP: delete the pod Mar 24 14:35:39.556: INFO: Waiting for pod projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de to disappear Mar 24 14:35:39.572: INFO: Pod projected-volume-b17f1ace-eb66-4928-984a-fe0e16a1d1de no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:35:39.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9205" for this suite. 
Mar 24 14:35:45.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:35:45.672: INFO: namespace projected-9205 deletion completed in 6.096959434s • [SLOW TEST:10.247 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:35:45.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Mar 24 14:35:45.731: INFO: Waiting up to 5m0s for pod "pod-3336e76a-c239-4d34-ae87-3a27913891ac" in namespace "emptydir-7753" to be "success or failure" Mar 24 14:35:45.735: INFO: Pod "pod-3336e76a-c239-4d34-ae87-3a27913891ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134886ms Mar 24 14:35:47.739: INFO: Pod "pod-3336e76a-c239-4d34-ae87-3a27913891ac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00848657s Mar 24 14:35:49.744: INFO: Pod "pod-3336e76a-c239-4d34-ae87-3a27913891ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013101542s STEP: Saw pod success Mar 24 14:35:49.744: INFO: Pod "pod-3336e76a-c239-4d34-ae87-3a27913891ac" satisfied condition "success or failure" Mar 24 14:35:49.747: INFO: Trying to get logs from node iruya-worker pod pod-3336e76a-c239-4d34-ae87-3a27913891ac container test-container: STEP: delete the pod Mar 24 14:35:49.766: INFO: Waiting for pod pod-3336e76a-c239-4d34-ae87-3a27913891ac to disappear Mar 24 14:35:49.791: INFO: Pod pod-3336e76a-c239-4d34-ae87-3a27913891ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:35:49.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7753" for this suite. Mar 24 14:35:55.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:35:55.902: INFO: namespace emptydir-7753 deletion completed in 6.107103705s • [SLOW TEST:10.230 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:35:55.902: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 24 14:35:55.996: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-label-changed,UID:d6ec58a0-8b61-4d86-92ed-a7c27913a541,ResourceVersion:1612671,Generation:0,CreationTimestamp:2020-03-24 14:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 24 14:35:55.996: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-label-changed,UID:d6ec58a0-8b61-4d86-92ed-a7c27913a541,ResourceVersion:1612672,Generation:0,CreationTimestamp:2020-03-24 14:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 24 
14:35:55.996: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-label-changed,UID:d6ec58a0-8b61-4d86-92ed-a7c27913a541,ResourceVersion:1612673,Generation:0,CreationTimestamp:2020-03-24 14:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 24 14:36:06.037: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-label-changed,UID:d6ec58a0-8b61-4d86-92ed-a7c27913a541,ResourceVersion:1612694,Generation:0,CreationTimestamp:2020-03-24 14:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 24 14:36:06.037: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-label-changed,UID:d6ec58a0-8b61-4d86-92ed-a7c27913a541,ResourceVersion:1612695,Generation:0,CreationTimestamp:2020-03-24 14:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 24 14:36:06.037: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-label-changed,UID:d6ec58a0-8b61-4d86-92ed-a7c27913a541,ResourceVersion:1612696,Generation:0,CreationTimestamp:2020-03-24 14:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:36:06.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3163" for this suite. 
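The watch test above shows how a label-selector watch translates label edits into events: an object that stops matching the selector is surfaced as DELETED, and surfaces as ADDED once the label is restored. A plain-shell sketch of that decision (no cluster needed; the function name is hypothetical):

```shell
# Model of the event a label-selector watcher sees when a watched
# object's label changes. Matches the ADDED/DELETED sequence logged
# above for watch-this-configmap=label-changed-and-restored.
selector="label-changed-and-restored"

event_for_label_change() {
  old=$1; new=$2
  if [ "$old" = "$selector" ] && [ "$new" != "$selector" ]; then
    echo DELETED   # object left the selector: watcher sees a delete
  elif [ "$old" != "$selector" ] && [ "$new" = "$selector" ]; then
    echo ADDED     # object rejoined the selector: watcher sees an add
  else
    echo MODIFIED  # matched before and after: ordinary modification
                   # (edits to never-matching objects are not delivered)
  fi
}

event_for_label_change "$selector" "some-other-value"   # prints DELETED
event_for_label_change "some-other-value" "$selector"   # prints ADDED
```

This is why the test, after changing the label, expects "not to observe a notification": the object no longer matches the watch's selector at all.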
Mar 24 14:36:12.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:36:12.136: INFO: namespace watch-3163 deletion completed in 6.094182625s • [SLOW TEST:16.234 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:36:12.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 24 14:36:12.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2060' Mar 24 14:36:12.415: INFO: stderr: "" Mar 24 14:36:12.415: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 24 14:36:12.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2060' Mar 24 14:36:12.521: INFO: stderr: "" Mar 24 14:36:12.521: INFO: stdout: "update-demo-nautilus-27g7g update-demo-nautilus-gnv7b " Mar 24 14:36:12.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27g7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2060' Mar 24 14:36:12.600: INFO: stderr: "" Mar 24 14:36:12.600: INFO: stdout: "" Mar 24 14:36:12.600: INFO: update-demo-nautilus-27g7g is created but not running Mar 24 14:36:17.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2060' Mar 24 14:36:17.693: INFO: stderr: "" Mar 24 14:36:17.693: INFO: stdout: "update-demo-nautilus-27g7g update-demo-nautilus-gnv7b " Mar 24 14:36:17.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27g7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2060' Mar 24 14:36:17.782: INFO: stderr: "" Mar 24 14:36:17.782: INFO: stdout: "true" Mar 24 14:36:17.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27g7g -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2060' Mar 24 14:36:17.876: INFO: stderr: "" Mar 24 14:36:17.876: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 14:36:17.876: INFO: validating pod update-demo-nautilus-27g7g Mar 24 14:36:17.880: INFO: got data: { "image": "nautilus.jpg" } Mar 24 14:36:17.880: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 14:36:17.880: INFO: update-demo-nautilus-27g7g is verified up and running Mar 24 14:36:17.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gnv7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2060' Mar 24 14:36:17.968: INFO: stderr: "" Mar 24 14:36:17.968: INFO: stdout: "true" Mar 24 14:36:17.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gnv7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2060' Mar 24 14:36:18.057: INFO: stderr: "" Mar 24 14:36:18.057: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 14:36:18.057: INFO: validating pod update-demo-nautilus-gnv7b Mar 24 14:36:18.061: INFO: got data: { "image": "nautilus.jpg" } Mar 24 14:36:18.061: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 24 14:36:18.061: INFO: update-demo-nautilus-gnv7b is verified up and running STEP: using delete to clean up resources Mar 24 14:36:18.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2060' Mar 24 14:36:18.175: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 14:36:18.175: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 24 14:36:18.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2060' Mar 24 14:36:18.283: INFO: stderr: "No resources found.\n" Mar 24 14:36:18.283: INFO: stdout: "" Mar 24 14:36:18.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2060 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 14:36:18.427: INFO: stderr: "" Mar 24 14:36:18.427: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:36:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2060" for this suite. 
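The cleanup step above uses a go-template to list only pods whose `deletionTimestamp` is unset (i.e. pods not already being torn down). A local sketch of what that template computes, over an invented tab-separated listing rather than live API objects:

```shell
# Stand-in data: name<TAB>deletionTimestamp. One pod is untouched,
# one is already marked for deletion (both names taken from the log).
pods() {
  printf 'update-demo-nautilus-27g7g\t\n'
  printf 'update-demo-nautilus-gnv7b\t2020-03-24T14:36:18Z\n'
}

# Equivalent of: {{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ end }}
pods | awk -F'\t' '$2 == "" { print $1 }'   # prints update-demo-nautilus-27g7g
```

An empty stdout from the real command is therefore the test's signal that every pod has at least been marked for deletion.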
Mar 24 14:36:40.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:36:40.545: INFO: namespace kubectl-2060 deletion completed in 22.093522181s • [SLOW TEST:28.408 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:36:40.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-8m9w STEP: Creating a pod to test atomic-volume-subpath Mar 24 14:36:40.625: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8m9w" in namespace "subpath-8403" to be "success or failure" Mar 24 14:36:40.628: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.263353ms Mar 24 14:36:42.633: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007749745s Mar 24 14:36:44.637: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 4.011885167s Mar 24 14:36:46.641: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 6.016191947s Mar 24 14:36:48.646: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 8.02044068s Mar 24 14:36:50.650: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 10.024387041s Mar 24 14:36:52.654: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 12.029019169s Mar 24 14:36:54.658: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 14.033113602s Mar 24 14:36:56.663: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 16.037498797s Mar 24 14:36:58.667: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 18.041677822s Mar 24 14:37:00.671: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 20.045509758s Mar 24 14:37:02.675: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Running", Reason="", readiness=true. Elapsed: 22.050281908s Mar 24 14:37:04.679: INFO: Pod "pod-subpath-test-secret-8m9w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054218485s STEP: Saw pod success Mar 24 14:37:04.679: INFO: Pod "pod-subpath-test-secret-8m9w" satisfied condition "success or failure" Mar 24 14:37:04.683: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-8m9w container test-container-subpath-secret-8m9w: STEP: delete the pod Mar 24 14:37:04.702: INFO: Waiting for pod pod-subpath-test-secret-8m9w to disappear Mar 24 14:37:04.706: INFO: Pod pod-subpath-test-secret-8m9w no longer exists STEP: Deleting pod pod-subpath-test-secret-8m9w Mar 24 14:37:04.706: INFO: Deleting pod "pod-subpath-test-secret-8m9w" in namespace "subpath-8403" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:37:04.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8403" for this suite. Mar 24 14:37:10.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:37:10.821: INFO: namespace subpath-8403 deletion completed in 6.109572374s • [SLOW TEST:30.276 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Mar 24 14:37:10.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 24 14:37:10.916: INFO: Waiting up to 5m0s for pod "pod-ebafa004-0b88-4f93-bbef-10312043fd6b" in namespace "emptydir-6290" to be "success or failure" Mar 24 14:37:10.918: INFO: Pod "pod-ebafa004-0b88-4f93-bbef-10312043fd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0067ms Mar 24 14:37:12.922: INFO: Pod "pod-ebafa004-0b88-4f93-bbef-10312043fd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006499916s Mar 24 14:37:14.926: INFO: Pod "pod-ebafa004-0b88-4f93-bbef-10312043fd6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010500763s STEP: Saw pod success Mar 24 14:37:14.926: INFO: Pod "pod-ebafa004-0b88-4f93-bbef-10312043fd6b" satisfied condition "success or failure" Mar 24 14:37:14.928: INFO: Trying to get logs from node iruya-worker2 pod pod-ebafa004-0b88-4f93-bbef-10312043fd6b container test-container: STEP: delete the pod Mar 24 14:37:14.941: INFO: Waiting for pod pod-ebafa004-0b88-4f93-bbef-10312043fd6b to disappear Mar 24 14:37:14.946: INFO: Pod pod-ebafa004-0b88-4f93-bbef-10312043fd6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:37:14.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6290" for this suite. 
Mar 24 14:37:20.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:37:21.046: INFO: namespace emptydir-6290 deletion completed in 6.097144804s • [SLOW TEST:10.225 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:37:21.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 24 14:37:29.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:29.180: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:31.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:31.185: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:33.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:33.184: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:35.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:35.184: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:37.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:37.185: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:39.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:39.184: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:41.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:41.185: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:43.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:43.185: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:45.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:45.186: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:47.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:47.185: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:49.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:49.185: INFO: Pod 
pod-with-poststart-exec-hook still exists Mar 24 14:37:51.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:51.185: INFO: Pod pod-with-poststart-exec-hook still exists Mar 24 14:37:53.180: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 24 14:37:53.185: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:37:53.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5751" for this suite. Mar 24 14:38:15.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:38:15.275: INFO: namespace container-lifecycle-hook-5751 deletion completed in 22.086364961s • [SLOW TEST:54.229 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:38:15.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 24 14:38:15.332: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6000" to be "success or failure" Mar 24 14:38:15.336: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984301ms Mar 24 14:38:17.341: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00841454s Mar 24 14:38:19.344: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011963054s STEP: Saw pod success Mar 24 14:38:19.344: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 24 14:38:19.346: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 24 14:38:19.446: INFO: Waiting for pod pod-host-path-test to disappear Mar 24 14:38:19.462: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:38:19.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6000" for this suite. 
Mar 24 14:38:25.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:38:25.562: INFO: namespace hostpath-6000 deletion completed in 6.097078564s • [SLOW TEST:10.287 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:38:25.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 24 14:38:30.169: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2f48cc85-bb42-40c6-8cbe-ec0d7e751217" Mar 24 14:38:30.169: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2f48cc85-bb42-40c6-8cbe-ec0d7e751217" in namespace "pods-382" to be "terminated due to deadline exceeded" Mar 24 14:38:30.175: INFO: Pod "pod-update-activedeadlineseconds-2f48cc85-bb42-40c6-8cbe-ec0d7e751217": 
Phase="Running", Reason="", readiness=true. Elapsed: 5.643519ms Mar 24 14:38:32.179: INFO: Pod "pod-update-activedeadlineseconds-2f48cc85-bb42-40c6-8cbe-ec0d7e751217": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010017962s Mar 24 14:38:32.179: INFO: Pod "pod-update-activedeadlineseconds-2f48cc85-bb42-40c6-8cbe-ec0d7e751217" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:38:32.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-382" for this suite. Mar 24 14:38:38.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:38:38.283: INFO: namespace pods-382 deletion completed in 6.099316942s • [SLOW TEST:12.720 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:38:38.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 24 14:38:38.345: INFO: Waiting up to 5m0s for pod "client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711" in namespace "containers-7911" to be "success or failure" Mar 24 14:38:38.349: INFO: Pod "client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711": Phase="Pending", Reason="", readiness=false. Elapsed: 3.811132ms Mar 24 14:38:40.365: INFO: Pod "client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020449115s Mar 24 14:38:42.370: INFO: Pod "client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024890732s STEP: Saw pod success Mar 24 14:38:42.370: INFO: Pod "client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711" satisfied condition "success or failure" Mar 24 14:38:42.373: INFO: Trying to get logs from node iruya-worker2 pod client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711 container test-container: STEP: delete the pod Mar 24 14:38:42.392: INFO: Waiting for pod client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711 to disappear Mar 24 14:38:42.397: INFO: Pod client-containers-5f9091de-5845-4e4c-af62-418fb3c0a711 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:38:42.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7911" for this suite. 
Mar 24 14:38:48.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:38:48.514: INFO: namespace containers-7911 deletion completed in 6.113796915s • [SLOW TEST:10.230 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:38:48.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0324 14:39:28.898773 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 24 14:39:28.898: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:39:28.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5891" for this suite. 
Mar 24 14:39:38.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:39:39.000: INFO: namespace gc-5891 deletion completed in 10.097710667s • [SLOW TEST:50.486 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:39:39.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:39:39.110: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 24 14:39:44.114: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 24 14:39:44.114: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 24 14:39:44.149: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5160,SelfLink:/apis/apps/v1/namespaces/deployment-5160/deployments/test-cleanup-deployment,UID:3ca7daf8-cf14-4f2f-baee-c9baad388d5f,ResourceVersion:1613556,Generation:1,CreationTimestamp:2020-03-24 14:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 24 14:39:44.168: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5160,SelfLink:/apis/apps/v1/namespaces/deployment-5160/replicasets/test-cleanup-deployment-55bbcbc84c,UID:397613b1-44ab-476a-b111-6c9a2b13f62b,ResourceVersion:1613558,Generation:1,CreationTimestamp:2020-03-24 14:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
3ca7daf8-cf14-4f2f-baee-c9baad388d5f 0xc002658c17 0xc002658c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 14:39:44.168: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 24 14:39:44.169: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5160,SelfLink:/apis/apps/v1/namespaces/deployment-5160/replicasets/test-cleanup-controller,UID:17d0b347-e813-4161-8e03-443f4131889f,ResourceVersion:1613557,Generation:1,CreationTimestamp:2020-03-24 14:39:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3ca7daf8-cf14-4f2f-baee-c9baad388d5f 0xc002658a77 0xc002658a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 24 14:39:44.201: INFO: Pod "test-cleanup-controller-dgz4j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dgz4j,GenerateName:test-cleanup-controller-,Namespace:deployment-5160,SelfLink:/api/v1/namespaces/deployment-5160/pods/test-cleanup-controller-dgz4j,UID:0807380b-3060-45f7-9f10-47177cc91ae1,ResourceVersion:1613549,Generation:0,CreationTimestamp:2020-03-24 14:39:39 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 17d0b347-e813-4161-8e03-443f4131889f 0xc002659907 0xc002659908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dvkwr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dvkwr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dvkwr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002659980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026599a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:39:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:39:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:39:41 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:39:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.36,StartTime:2020-03-24 14:39:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-24 14:39:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://78ff1ef1c83ed3c9194e12511aa9c651154544cd4aa404f35f1260b45d23bc49}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:39:44.201: INFO: Pod "test-cleanup-deployment-55bbcbc84c-6pbr2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-6pbr2,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5160,SelfLink:/api/v1/namespaces/deployment-5160/pods/test-cleanup-deployment-55bbcbc84c-6pbr2,UID:5f1bb3b6-27f4-44a8-8c96-350971a70a27,ResourceVersion:1613564,Generation:0,CreationTimestamp:2020-03-24 14:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 397613b1-44ab-476a-b111-6c9a2b13f62b 0xc002659a97 0xc002659a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dvkwr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dvkwr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dvkwr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002659b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002659b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:39:44 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:39:44.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5160" for this suite. 
Mar 24 14:39:50.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:39:50.328: INFO: namespace deployment-5160 deletion completed in 6.105486038s • [SLOW TEST:11.328 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:39:50.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:39:50.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6524" for this suite. 
Mar 24 14:39:56.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:39:56.559: INFO: namespace kubelet-test-6524 deletion completed in 6.094155198s • [SLOW TEST:6.231 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:39:56.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 24 14:40:04.719: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:04.722: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:06.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:06.726: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:08.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:08.729: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:10.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:10.726: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:12.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:12.726: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:14.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:14.726: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:16.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:16.728: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:18.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:18.728: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:20.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:20.727: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:22.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:22.727: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:24.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:24.727: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:26.722: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Mar 24 14:40:26.727: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:28.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:28.727: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:30.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:30.727: INFO: Pod pod-with-prestop-exec-hook still exists Mar 24 14:40:32.722: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 24 14:40:32.727: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:40:32.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2053" for this suite. Mar 24 14:40:54.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:40:54.889: INFO: namespace container-lifecycle-hook-2053 deletion completed in 22.150547704s • [SLOW TEST:58.330 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:40:54.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 24 14:41:02.994: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 14:41:02.999: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 14:41:04.999: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 14:41:05.003: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 14:41:06.999: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 14:41:07.007: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 14:41:08.999: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 14:41:09.003: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 14:41:10.999: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 14:41:11.003: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 14:41:12.999: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 14:41:13.002: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:41:13.002: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9767" for this suite. Mar 24 14:41:35.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:41:35.121: INFO: namespace container-lifecycle-hook-9767 deletion completed in 22.114206719s • [SLOW TEST:40.231 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:41:35.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:41:35.246: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"992556e8-60d0-4751-8b8f-2fee47b3818e", Controller:(*bool)(0xc003d94022), BlockOwnerDeletion:(*bool)(0xc003d94023)}} Mar 24 14:41:35.275: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", 
UID:"984b3f78-974e-4946-ad9f-48705babfccf", Controller:(*bool)(0xc001c89f42), BlockOwnerDeletion:(*bool)(0xc001c89f43)}} Mar 24 14:41:35.286: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e6138c37-be0d-4499-8249-dd73c0e085e4", Controller:(*bool)(0xc003251a6a), BlockOwnerDeletion:(*bool)(0xc003251a6b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:41:40.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8685" for this suite. Mar 24 14:41:46.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:41:46.478: INFO: namespace gc-8685 deletion completed in 6.091939954s • [SLOW TEST:11.357 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:41:46.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should 
support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 24 14:41:46.541: INFO: Creating deployment "nginx-deployment" Mar 24 14:41:46.545: INFO: Waiting for observed generation 1 Mar 24 14:41:48.570: INFO: Waiting for all required pods to come up Mar 24 14:41:48.575: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 24 14:41:56.584: INFO: Waiting for deployment "nginx-deployment" to complete Mar 24 14:41:56.590: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 24 14:41:56.597: INFO: Updating deployment nginx-deployment Mar 24 14:41:56.597: INFO: Waiting for observed generation 2 Mar 24 14:41:58.609: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 24 14:41:58.615: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 24 14:41:58.618: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 24 14:41:58.625: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 24 14:41:58.626: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 24 14:41:58.627: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 24 14:41:58.631: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 24 14:41:58.631: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 24 14:41:58.636: INFO: Updating deployment nginx-deployment Mar 24 14:41:58.636: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 24 14:41:58.682: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 24 14:41:58.738: INFO: Verifying that second rollout's replicaset has 
.spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 24 14:41:58.878: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3404,SelfLink:/apis/apps/v1/namespaces/deployment-3404/deployments/nginx-deployment,UID:591b0d51-dca2-4bb8-a7ee-32cd39c7189d,ResourceVersion:1614211,Generation:3,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-24 14:41:57 +0000 UTC 2020-03-24 14:41:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-03-24 14:41:58 +0000 UTC 2020-03-24 14:41:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 24 14:41:58.933: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3404,SelfLink:/apis/apps/v1/namespaces/deployment-3404/replicasets/nginx-deployment-55fb7cb77f,UID:71068082-66e8-4a7f-954f-24a680e7b9f2,ResourceVersion:1614249,Generation:3,CreationTimestamp:2020-03-24 14:41:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 591b0d51-dca2-4bb8-a7ee-32cd39c7189d 0xc002aa1377 0xc002aa1378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 24 14:41:58.933: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 24 14:41:58.933: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3404,SelfLink:/apis/apps/v1/namespaces/deployment-3404/replicasets/nginx-deployment-7b8c6f4498,UID:83576fb9-4f49-495d-8860-ee3f827543c7,ResourceVersion:1614244,Generation:3,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 591b0d51-dca2-4bb8-a7ee-32cd39c7189d 0xc002aa1447 0xc002aa1448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 24 14:41:59.090: INFO: Pod "nginx-deployment-55fb7cb77f-57qhn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-57qhn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-57qhn,UID:e119b165-2a1c-4948-aaa3-03512f677c97,ResourceVersion:1614234,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230cbd7 0xc00230cbd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00230cc70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230cc90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.090: INFO: Pod "nginx-deployment-55fb7cb77f-6rdn2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6rdn2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-6rdn2,UID:629a7370-ce3e-41f6-930c-aa32ff932c12,ResourceVersion:1614247,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230cd27 0xc00230cd28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230cda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230cdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.090: INFO: Pod "nginx-deployment-55fb7cb77f-9drvn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9drvn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-9drvn,UID:74f1dc05-44bb-4d7b-96a6-672b9593b1ef,ResourceVersion:1614230,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230ce57 0xc00230ce58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230cee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230cf00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.091: INFO: Pod "nginx-deployment-55fb7cb77f-djjxm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-djjxm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-djjxm,UID:ab174c92-1ca3-4ce2-8a63-a3345eb414e6,ResourceVersion:1614170,Generation:0,CreationTimestamp:2020-03-24 14:41:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230cf97 0xc00230cf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00230d010} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-24 14:41:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.091: INFO: Pod "nginx-deployment-55fb7cb77f-fsbf9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fsbf9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-fsbf9,UID:67378310-322e-403b-b757-35cb552c994a,ResourceVersion:1614183,Generation:0,CreationTimestamp:2020-03-24 14:41:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d100 0xc00230d101}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230d1b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-24 14:41:56 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.091: INFO: Pod "nginx-deployment-55fb7cb77f-fw45b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fw45b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-fw45b,UID:4fd54dd7-701d-4967-89b1-a8b94c40da74,ResourceVersion:1614250,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d2a0 0xc00230d2a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230d330} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-24 14:41:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.091: INFO: Pod "nginx-deployment-55fb7cb77f-jd5n6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jd5n6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-jd5n6,UID:021008ee-5924-4d58-9161-ea809f155b8a,ResourceVersion:1614217,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d420 0xc00230d421}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00230d4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.091: INFO: Pod "nginx-deployment-55fb7cb77f-kvbjn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kvbjn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-kvbjn,UID:f78174cd-c1f3-4049-b54d-e44ef61332e1,ResourceVersion:1614160,Generation:0,CreationTimestamp:2020-03-24 14:41:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d547 0xc00230d548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230d5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-24 14:41:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.092: INFO: Pod "nginx-deployment-55fb7cb77f-lg7ns" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lg7ns,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-lg7ns,UID:3c56fdd4-395a-4950-a357-1587c0b7892d,ResourceVersion:1614155,Generation:0,CreationTimestamp:2020-03-24 14:41:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d6b0 0xc00230d6b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00230d730} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-24 14:41:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.092: INFO: Pod "nginx-deployment-55fb7cb77f-n5g2g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n5g2g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-n5g2g,UID:ecd415cc-c86e-4ed3-9caf-f0d8a8bcc41e,ResourceVersion:1614240,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d820 0xc00230d821}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230d8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.092: INFO: Pod "nginx-deployment-55fb7cb77f-qndkw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qndkw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-qndkw,UID:5433c51a-5354-44e6-963e-ca61df25fc67,ResourceVersion:1614236,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230d947 0xc00230d948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00230d9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230d9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.092: INFO: Pod "nginx-deployment-55fb7cb77f-rshjz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rshjz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-rshjz,UID:36c56452-1f91-4f55-885d-4799af3b4cdd,ResourceVersion:1614235,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230da67 0xc00230da68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230dae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230db00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.092: INFO: Pod "nginx-deployment-55fb7cb77f-zmtz9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zmtz9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-55fb7cb77f-zmtz9,UID:83fe0ff3-1afc-43b0-b5f4-11e239cedf28,ResourceVersion:1614185,Generation:0,CreationTimestamp:2020-03-24 14:41:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 71068082-66e8-4a7f-954f-24a680e7b9f2 0xc00230db87 0xc00230db88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230dc00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230dc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:56 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-24 14:41:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.093: INFO: Pod "nginx-deployment-7b8c6f4498-4bbsv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4bbsv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-4bbsv,UID:dc0b1a57-6bf9-4533-9e75-267414aedf1d,ResourceVersion:1614239,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00230dcf0 0xc00230dcf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230dd60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230dd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.093: INFO: Pod "nginx-deployment-7b8c6f4498-58fmv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-58fmv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-58fmv,UID:94e24a00-b013-426d-9b91-75f22913095d,ResourceVersion:1614224,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00230de07 0xc00230de08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00230df90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00230dfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.093: INFO: Pod "nginx-deployment-7b8c6f4498-7cmmf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7cmmf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-7cmmf,UID:4dd2211e-b1cf-4aa9-a331-2efd33cc0220,ResourceVersion:1614248,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c077 0xc00263c078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c140} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-24 14:41:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.093: INFO: Pod "nginx-deployment-7b8c6f4498-7hmnh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7hmnh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-7hmnh,UID:df3a9685-a725-45b6-821d-c6ac9030f5dd,ResourceVersion:1614109,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c227 0xc00263c228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.143,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-24 14:41:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d51f85599ba766b73ea01dc45bfe55984c9004ec40a37cc63929638834427262}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.093: INFO: Pod "nginx-deployment-7b8c6f4498-bq9qw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bq9qw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-bq9qw,UID:dd1f763d-37a4-4f13-a83f-222823077266,ResourceVersion:1614070,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c3c7 0xc00263c3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c440} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.139,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-24 14:41:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://174f6e722a6dc083a85bd8ee6518eb1ace496f3a5f0980d04d30a9b1ad21296e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.094: INFO: Pod "nginx-deployment-7b8c6f4498-flb6z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-flb6z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-flb6z,UID:96c0f5da-d9ea-435a-9ed1-ab2b0fee46bd,ResourceVersion:1614222,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c557 0xc00263c558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.094: INFO: Pod "nginx-deployment-7b8c6f4498-ggc7f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ggc7f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-ggc7f,UID:f9c3972e-a567-4886-8579-dc5821bc9b6e,ResourceVersion:1614086,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c677 0xc00263c678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c6f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.41,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-03-24 14:41:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4b72dd824b7c49b1f313c71e3f6e9be5bce605a13183093756d5478b6d67f315}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.094: INFO: Pod "nginx-deployment-7b8c6f4498-jhcdz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jhcdz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-jhcdz,UID:f31b97cb-cec0-49e8-9eb7-32b49c3b44b2,ResourceVersion:1614241,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c7e7 0xc00263c7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.094: INFO: Pod "nginx-deployment-7b8c6f4498-lj67j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lj67j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-lj67j,UID:9c811a58-7aa2-4a62-9e1b-3fb6e69c2d89,ResourceVersion:1614238,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263c907 0xc00263c908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263c980} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263c9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.094: INFO: Pod "nginx-deployment-7b8c6f4498-nm7qt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nm7qt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-nm7qt,UID:3abc2674-d1cf-415f-99d3-c231865fd971,ResourceVersion:1614130,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263ca27 0xc00263ca28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263caa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263cac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.142,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-24 14:41:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://68cb3daef911a0eabba967fbf445212e686b5605b34e457456ef4efef95976ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.095: INFO: Pod "nginx-deployment-7b8c6f4498-p5qbt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p5qbt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-p5qbt,UID:c230c475-fb2a-4086-a51c-84c0f288c15e,ResourceVersion:1614214,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263cb97 0xc00263cb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263cc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263cc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.095: INFO: Pod "nginx-deployment-7b8c6f4498-pk9cn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pk9cn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-pk9cn,UID:457c7949-e07d-42ed-ad09-b81e200a57d2,ResourceVersion:1614212,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263ccb7 0xc00263ccb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263cd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263cd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.095: INFO: Pod "nginx-deployment-7b8c6f4498-q6rtg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q6rtg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-q6rtg,UID:200b8b72-5da8-45f2-b333-1b52a1082cac,ResourceVersion:1614237,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263cdd7 0xc00263cdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263ce50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263ce70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.095: INFO: Pod "nginx-deployment-7b8c6f4498-qrn6c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qrn6c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-qrn6c,UID:d66a95a8-e98e-4e8c-802b-44c6ae29cded,ResourceVersion:1614118,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263cef7 0xc00263cef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263cf70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263cf90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.44,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-03-24 14:41:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d86d07083953782e5b0beeded32a6f9dfcff9dcc98ae4c2f7d512070e190cd8d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.095: INFO: Pod "nginx-deployment-7b8c6f4498-rdr2h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rdr2h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-rdr2h,UID:34739d2d-740d-4a82-9f83-8a017aae05b5,ResourceVersion:1614112,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263d067 0xc00263d068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.141,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-24 14:41:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://93af7f2363a9a1d90aa77b52cec92d4d1dd019f149f50f908a62bbb445a0b7a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.095: INFO: Pod "nginx-deployment-7b8c6f4498-rh88n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rh88n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-rh88n,UID:cceea7fe-a492-4d02-9c3a-556bf049bb66,ResourceVersion:1614243,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263d1d7 0xc00263d1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d250} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.096: INFO: Pod "nginx-deployment-7b8c6f4498-rxr2d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rxr2d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-rxr2d,UID:7becee6b-d5e0-4e59-a803-e46038f3cf59,ResourceVersion:1614089,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263d2f7 0xc00263d2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d370} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.40,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-03-24 14:41:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b8f1169e94509303f29e27a0ede91512f2f10afe79ed20e4bad810710f822470}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.096: INFO: Pod "nginx-deployment-7b8c6f4498-stlxf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-stlxf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-stlxf,UID:e6c51f6c-2e2c-4f6f-948c-74a842f463d3,ResourceVersion:1614225,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263d467 0xc00263d468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.096: INFO: Pod "nginx-deployment-7b8c6f4498-thknx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-thknx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-thknx,UID:a5d84351-af5c-4606-aea2-f2319f8751c5,ResourceVersion:1614231,Generation:0,CreationTimestamp:2020-03-24 14:41:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263d587 0xc00263d588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d600} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 24 14:41:59.096: INFO: Pod "nginx-deployment-7b8c6f4498-zkbn6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zkbn6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3404,SelfLink:/api/v1/namespaces/deployment-3404/pods/nginx-deployment-7b8c6f4498-zkbn6,UID:f14e21b6-52fd-4193-b934-89bd6f3137bc,ResourceVersion:1614094,Generation:0,CreationTimestamp:2020-03-24 14:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 83576fb9-4f49-495d-8860-ee3f827543c7 0xc00263d6a7 0xc00263d6a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4sfl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4sfl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4sfl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d720} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 14:41:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.140,StartTime:2020-03-24 14:41:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-24 14:41:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3776b00e7a8b8648c1e17c78383d0d36d5239a93123ea540bff8349cf693ad5b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:41:59.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-3404" for this suite.
Mar 24 14:42:17.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:42:17.402: INFO: namespace deployment-3404 deletion completed in 18.259929501s
• [SLOW TEST:30.922 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:42:17.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 24 14:42:17.463: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar 24 14:42:19.564: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:42:20.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2426" for this suite.
Mar 24 14:42:27.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:42:27.382: INFO: namespace replication-controller-2426 deletion completed in 6.804022127s
• [SLOW TEST:9.980 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:42:27.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 24 14:42:27.464: INFO: Waiting up to 5m0s for pod "pod-cfae14a9-c911-486a-9015-9a3d7a9472d4" in namespace "emptydir-5738" to be "success or failure"
Mar 24 14:42:27.468: INFO: Pod "pod-cfae14a9-c911-486a-9015-9a3d7a9472d4": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.826423ms
Mar 24 14:42:29.472: INFO: Pod "pod-cfae14a9-c911-486a-9015-9a3d7a9472d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007890456s
Mar 24 14:42:31.477: INFO: Pod "pod-cfae14a9-c911-486a-9015-9a3d7a9472d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012386309s
STEP: Saw pod success
Mar 24 14:42:31.477: INFO: Pod "pod-cfae14a9-c911-486a-9015-9a3d7a9472d4" satisfied condition "success or failure"
Mar 24 14:42:31.479: INFO: Trying to get logs from node iruya-worker2 pod pod-cfae14a9-c911-486a-9015-9a3d7a9472d4 container test-container:
STEP: delete the pod
Mar 24 14:42:31.590: INFO: Waiting for pod pod-cfae14a9-c911-486a-9015-9a3d7a9472d4 to disappear
Mar 24 14:42:31.654: INFO: Pod pod-cfae14a9-c911-486a-9015-9a3d7a9472d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:42:31.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5738" for this suite.
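[Editor's note] The entries above show the (non-root,0644,tmpfs) EmptyDir test creating a pod, waiting for it to succeed, and tearing it down. The kind of pod it exercises can be approximated with a manifest like the following. This is an illustrative sketch, not the suite's exact spec: the pod name, user ID, image, and shell command are all assumptions; only the non-root user, the 0644 file mode, and the `medium: Memory` (tmpfs-backed) emptyDir come from the test name and log.

```yaml
# Hypothetical approximation of the pod this test spins up.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root, per the test name
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox:1.29            # stand-in for the suite's test image
    command: ["sh", "-c",
      "touch /test-volume/test-file && chmod 0644 /test-volume/test-file && stat -c '%a' /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```

The test then treats the pod reaching `Succeeded` (the container exiting 0 after verifying the mode) as "success or failure" being satisfied, matching the log lines above.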
Mar 24 14:42:37.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:42:37.745: INFO: namespace emptydir-5738 deletion completed in 6.086001813s
• [SLOW TEST:10.363 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:42:37.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 24 14:42:37.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9" in namespace "projected-4410" to be "success or failure"
Mar 24 14:42:37.803: INFO: Pod "downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9": Phase="Pending", Reason="", readiness=false.
Elapsed: 20.790104ms
Mar 24 14:42:39.806: INFO: Pod "downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0244455s
Mar 24 14:42:41.816: INFO: Pod "downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034130265s
STEP: Saw pod success
Mar 24 14:42:41.816: INFO: Pod "downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9" satisfied condition "success or failure"
Mar 24 14:42:41.819: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9 container client-container:
STEP: delete the pod
Mar 24 14:42:41.884: INFO: Waiting for pod downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9 to disappear
Mar 24 14:42:41.960: INFO: Pod downwardapi-volume-8a277a77-1264-4ef7-ad4c-4ff2c7dfade9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:42:41.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4410" for this suite.
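[Editor's note] The Projected downwardAPI test above verifies that when a container sets no CPU limit, `resourceFieldRef: limits.cpu` in a projected downward API volume falls back to the node's allocatable CPU. A minimal sketch of that kind of pod is below; the pod name, image, mount path, and command are illustrative assumptions, while the `projected` / `downwardAPI` / `resourceFieldRef` structure is standard Kubernetes API.

```yaml
# Hypothetical sketch of a pod exposing the default cpu limit
# through a projected downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29            # stand-in for the suite's test image
    # No resources.limits.cpu is set, so limits.cpu resolves to
    # the node's allocatable CPU.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The e2e framework's "success or failure" wait seen in the log corresponds to this pod running once, printing the projected value, and exiting cleanly.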
Mar 24 14:42:47.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:42:48.076: INFO: namespace projected-4410 deletion completed in 6.112178807s
• [SLOW TEST:10.330 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 24 14:42:48.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 24 14:42:48.141: INFO: Waiting up to 5m0s for pod "pod-e9a835b0-6160-46b0-b56c-5e44ab693c00" in namespace "emptydir-3398" to be "success or failure"
Mar 24 14:42:48.144: INFO: Pod "pod-e9a835b0-6160-46b0-b56c-5e44ab693c00": Phase="Pending", Reason="", readiness=false. Elapsed: 3.425711ms
Mar 24 14:42:50.152: INFO: Pod "pod-e9a835b0-6160-46b0-b56c-5e44ab693c00": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.011392132s Mar 24 14:42:52.156: INFO: Pod "pod-e9a835b0-6160-46b0-b56c-5e44ab693c00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015874181s STEP: Saw pod success Mar 24 14:42:52.157: INFO: Pod "pod-e9a835b0-6160-46b0-b56c-5e44ab693c00" satisfied condition "success or failure" Mar 24 14:42:52.160: INFO: Trying to get logs from node iruya-worker2 pod pod-e9a835b0-6160-46b0-b56c-5e44ab693c00 container test-container: STEP: delete the pod Mar 24 14:42:52.189: INFO: Waiting for pod pod-e9a835b0-6160-46b0-b56c-5e44ab693c00 to disappear Mar 24 14:42:52.204: INFO: Pod pod-e9a835b0-6160-46b0-b56c-5e44ab693c00 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:42:52.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3398" for this suite. Mar 24 14:42:58.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:42:58.306: INFO: namespace emptydir-3398 deletion completed in 6.098231297s • [SLOW TEST:10.230 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:42:58.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 24 14:43:05.021: INFO: 0 pods remaining Mar 24 14:43:05.021: INFO: 0 pods has nil DeletionTimestamp Mar 24 14:43:05.021: INFO: Mar 24 14:43:05.865: INFO: 0 pods remaining Mar 24 14:43:05.865: INFO: 0 pods has nil DeletionTimestamp Mar 24 14:43:05.865: INFO: STEP: Gathering metrics W0324 14:43:06.811376 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 14:43:06.811: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:43:06.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "gc-8903" for this suite. Mar 24 14:43:12.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:43:13.005: INFO: namespace gc-8903 deletion completed in 6.158765854s • [SLOW TEST:14.699 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:43:13.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 24 14:43:13.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3440' Mar 24 14:43:13.375: INFO: stderr: "" Mar 24 14:43:13.375: INFO: stdout: "pod/pause created\n" Mar 24 14:43:13.375: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 24 14:43:13.375: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3440" to be "running and ready" Mar 24 14:43:13.463: INFO: Pod "pause": 
Phase="Pending", Reason="", readiness=false. Elapsed: 87.927684ms Mar 24 14:43:15.467: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091831601s Mar 24 14:43:17.471: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.095740643s Mar 24 14:43:17.471: INFO: Pod "pause" satisfied condition "running and ready" Mar 24 14:43:17.471: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 24 14:43:17.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3440' Mar 24 14:43:17.583: INFO: stderr: "" Mar 24 14:43:17.583: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 24 14:43:17.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3440' Mar 24 14:43:17.702: INFO: stderr: "" Mar 24 14:43:17.702: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 24 14:43:17.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3440' Mar 24 14:43:17.808: INFO: stderr: "" Mar 24 14:43:17.808: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 24 14:43:17.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3440' Mar 24 14:43:17.896: INFO: stderr: "" Mar 24 14:43:17.896: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" 
[AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 24 14:43:17.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3440' Mar 24 14:43:17.999: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 14:43:17.999: INFO: stdout: "pod \"pause\" force deleted\n" Mar 24 14:43:17.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3440' Mar 24 14:43:18.102: INFO: stderr: "No resources found.\n" Mar 24 14:43:18.102: INFO: stdout: "" Mar 24 14:43:18.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3440 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 14:43:18.198: INFO: stderr: "" Mar 24 14:43:18.198: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:43:18.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3440" for this suite. 
Mar 24 14:43:24.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:43:24.325: INFO: namespace kubectl-3440 deletion completed in 6.123131281s • [SLOW TEST:11.320 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:43:24.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:43:29.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2090" for this suite. 
Mar 24 14:43:35.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:43:36.066: INFO: namespace watch-2090 deletion completed in 6.182263151s • [SLOW TEST:11.741 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:43:36.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 24 14:43:36.143: INFO: Waiting up to 5m0s for pod "pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0" in namespace "emptydir-2766" to be "success or failure" Mar 24 14:43:36.161: INFO: Pod "pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.637205ms Mar 24 14:43:38.166: INFO: Pod "pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02282438s Mar 24 14:43:40.170: INFO: Pod "pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027004417s STEP: Saw pod success Mar 24 14:43:40.170: INFO: Pod "pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0" satisfied condition "success or failure" Mar 24 14:43:40.173: INFO: Trying to get logs from node iruya-worker2 pod pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0 container test-container: STEP: delete the pod Mar 24 14:43:40.188: INFO: Waiting for pod pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0 to disappear Mar 24 14:43:40.192: INFO: Pod pod-7c0b2ae9-d215-4a52-994c-b86bf8bdc4d0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:43:40.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2766" for this suite. Mar 24 14:43:46.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:43:46.303: INFO: namespace emptydir-2766 deletion completed in 6.108085261s • [SLOW TEST:10.237 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:43:46.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with 
empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-2f6aea55-18b9-400a-b917-1d56548335ee [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 24 14:43:46.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3461" for this suite. Mar 24 14:43:52.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 24 14:43:52.447: INFO: namespace configmap-3461 deletion completed in 6.090982991s • [SLOW TEST:6.143 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 24 14:43:52.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1573 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1573 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1573 Mar 24 14:43:52.536: INFO: Found 0 stateful pods, waiting for 1 Mar 24 14:44:02.541: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 24 14:44:02.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 14:44:02.790: INFO: stderr: "I0324 14:44:02.664965 3681 log.go:172] (0xc0001166e0) (0xc0004ee6e0) Create stream\nI0324 14:44:02.665033 3681 log.go:172] (0xc0001166e0) (0xc0004ee6e0) Stream added, broadcasting: 1\nI0324 14:44:02.672401 3681 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0324 14:44:02.672706 3681 log.go:172] (0xc0001166e0) (0xc0008b4000) Create stream\nI0324 14:44:02.672742 3681 log.go:172] (0xc0001166e0) (0xc0008b4000) Stream added, broadcasting: 3\nI0324 14:44:02.674646 3681 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0324 14:44:02.674681 3681 log.go:172] (0xc0001166e0) (0xc0008b40a0) Create stream\nI0324 14:44:02.674693 3681 log.go:172] (0xc0001166e0) (0xc0008b40a0) Stream added, broadcasting: 5\nI0324 14:44:02.675631 3681 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0324 14:44:02.756920 3681 log.go:172] (0xc0001166e0) Data frame received for 5\nI0324 14:44:02.756947 3681 log.go:172] (0xc0008b40a0) (5) 
Data frame handling\nI0324 14:44:02.756968 3681 log.go:172] (0xc0008b40a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 14:44:02.782278 3681 log.go:172] (0xc0001166e0) Data frame received for 3\nI0324 14:44:02.782325 3681 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0324 14:44:02.782366 3681 log.go:172] (0xc0008b4000) (3) Data frame sent\nI0324 14:44:02.782455 3681 log.go:172] (0xc0001166e0) Data frame received for 3\nI0324 14:44:02.782477 3681 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0324 14:44:02.782775 3681 log.go:172] (0xc0001166e0) Data frame received for 5\nI0324 14:44:02.782806 3681 log.go:172] (0xc0008b40a0) (5) Data frame handling\nI0324 14:44:02.784625 3681 log.go:172] (0xc0001166e0) Data frame received for 1\nI0324 14:44:02.784646 3681 log.go:172] (0xc0004ee6e0) (1) Data frame handling\nI0324 14:44:02.784660 3681 log.go:172] (0xc0004ee6e0) (1) Data frame sent\nI0324 14:44:02.784679 3681 log.go:172] (0xc0001166e0) (0xc0004ee6e0) Stream removed, broadcasting: 1\nI0324 14:44:02.784704 3681 log.go:172] (0xc0001166e0) Go away received\nI0324 14:44:02.785293 3681 log.go:172] (0xc0001166e0) (0xc0004ee6e0) Stream removed, broadcasting: 1\nI0324 14:44:02.785315 3681 log.go:172] (0xc0001166e0) (0xc0008b4000) Stream removed, broadcasting: 3\nI0324 14:44:02.785328 3681 log.go:172] (0xc0001166e0) (0xc0008b40a0) Stream removed, broadcasting: 5\n" Mar 24 14:44:02.790: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 24 14:44:02.790: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 24 14:44:02.794: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 24 14:44:12.799: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 24 14:44:12.799: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 14:44:12.812: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 9.999999057s Mar 24 14:44:13.817: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996999241s Mar 24 14:44:14.825: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992085788s Mar 24 14:44:15.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984064129s Mar 24 14:44:16.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979567998s Mar 24 14:44:17.839: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974718881s Mar 24 14:44:18.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970092217s Mar 24 14:44:19.866: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965419821s Mar 24 14:44:20.871: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.943633349s Mar 24 14:44:21.876: INFO: Verifying statefulset ss doesn't scale past 1 for another 938.639244ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1573 Mar 24 14:44:22.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 24 14:44:23.095: INFO: stderr: "I0324 14:44:23.007128 3702 log.go:172] (0xc000880420) (0xc0003f2780) Create stream\nI0324 14:44:23.007173 3702 log.go:172] (0xc000880420) (0xc0003f2780) Stream added, broadcasting: 1\nI0324 14:44:23.010357 3702 log.go:172] (0xc000880420) Reply frame received for 1\nI0324 14:44:23.010648 3702 log.go:172] (0xc000880420) (0xc00097a000) Create stream\nI0324 14:44:23.010744 3702 log.go:172] (0xc000880420) (0xc00097a000) Stream added, broadcasting: 3\nI0324 14:44:23.012557 3702 log.go:172] (0xc000880420) Reply frame received for 3\nI0324 14:44:23.012617 3702 log.go:172] (0xc000880420) (0xc00097a0a0) Create stream\nI0324 14:44:23.012640 3702 log.go:172] (0xc000880420) (0xc00097a0a0) 
Stream added, broadcasting: 5\nI0324 14:44:23.013973 3702 log.go:172] (0xc000880420) Reply frame received for 5\nI0324 14:44:23.088203 3702 log.go:172] (0xc000880420) Data frame received for 5\nI0324 14:44:23.088248 3702 log.go:172] (0xc00097a0a0) (5) Data frame handling\nI0324 14:44:23.088264 3702 log.go:172] (0xc00097a0a0) (5) Data frame sent\nI0324 14:44:23.088300 3702 log.go:172] (0xc000880420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 14:44:23.088327 3702 log.go:172] (0xc00097a0a0) (5) Data frame handling\nI0324 14:44:23.088380 3702 log.go:172] (0xc000880420) Data frame received for 3\nI0324 14:44:23.088405 3702 log.go:172] (0xc00097a000) (3) Data frame handling\nI0324 14:44:23.088446 3702 log.go:172] (0xc00097a000) (3) Data frame sent\nI0324 14:44:23.088478 3702 log.go:172] (0xc000880420) Data frame received for 3\nI0324 14:44:23.088514 3702 log.go:172] (0xc00097a000) (3) Data frame handling\nI0324 14:44:23.090605 3702 log.go:172] (0xc000880420) Data frame received for 1\nI0324 14:44:23.090633 3702 log.go:172] (0xc0003f2780) (1) Data frame handling\nI0324 14:44:23.090646 3702 log.go:172] (0xc0003f2780) (1) Data frame sent\nI0324 14:44:23.090662 3702 log.go:172] (0xc000880420) (0xc0003f2780) Stream removed, broadcasting: 1\nI0324 14:44:23.090679 3702 log.go:172] (0xc000880420) Go away received\nI0324 14:44:23.091071 3702 log.go:172] (0xc000880420) (0xc0003f2780) Stream removed, broadcasting: 1\nI0324 14:44:23.091093 3702 log.go:172] (0xc000880420) (0xc00097a000) Stream removed, broadcasting: 3\nI0324 14:44:23.091107 3702 log.go:172] (0xc000880420) (0xc00097a0a0) Stream removed, broadcasting: 5\n" Mar 24 14:44:23.095: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 24 14:44:23.095: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 24 14:44:23.099: INFO: Found 1 stateful pods, waiting for 3 Mar 24 
14:44:33.104: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 14:44:33.105: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 14:44:33.105: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 24 14:44:33.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 14:44:33.348: INFO: stderr: "I0324 14:44:33.235617 3722 log.go:172] (0xc000a9e630) (0xc00018aa00) Create stream\nI0324 14:44:33.235665 3722 log.go:172] (0xc000a9e630) (0xc00018aa00) Stream added, broadcasting: 1\nI0324 14:44:33.239542 3722 log.go:172] (0xc000a9e630) Reply frame received for 1\nI0324 14:44:33.239602 3722 log.go:172] (0xc000a9e630) (0xc000268000) Create stream\nI0324 14:44:33.239630 3722 log.go:172] (0xc000a9e630) (0xc000268000) Stream added, broadcasting: 3\nI0324 14:44:33.240805 3722 log.go:172] (0xc000a9e630) Reply frame received for 3\nI0324 14:44:33.240842 3722 log.go:172] (0xc000a9e630) (0xc000268140) Create stream\nI0324 14:44:33.240855 3722 log.go:172] (0xc000a9e630) (0xc000268140) Stream added, broadcasting: 5\nI0324 14:44:33.242604 3722 log.go:172] (0xc000a9e630) Reply frame received for 5\nI0324 14:44:33.344340 3722 log.go:172] (0xc000a9e630) Data frame received for 3\nI0324 14:44:33.344398 3722 log.go:172] (0xc000268000) (3) Data frame handling\nI0324 14:44:33.344420 3722 log.go:172] (0xc000268000) (3) Data frame sent\nI0324 14:44:33.344441 3722 log.go:172] (0xc000a9e630) Data frame received for 3\nI0324 14:44:33.344454 3722 log.go:172] (0xc000268000) (3) Data frame handling\nI0324 14:44:33.344474 3722 log.go:172] (0xc000a9e630) Data frame received for 5\nI0324 14:44:33.344495 3722 log.go:172] 
(0xc000268140) (5) Data frame handling\nI0324 14:44:33.344506 3722 log.go:172] (0xc000268140) (5) Data frame sent\nI0324 14:44:33.344511 3722 log.go:172] (0xc000a9e630) Data frame received for 5\nI0324 14:44:33.344520 3722 log.go:172] (0xc000268140) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 14:44:33.345618 3722 log.go:172] (0xc000a9e630) Data frame received for 1\nI0324 14:44:33.345629 3722 log.go:172] (0xc00018aa00) (1) Data frame handling\nI0324 14:44:33.345638 3722 log.go:172] (0xc00018aa00) (1) Data frame sent\nI0324 14:44:33.345779 3722 log.go:172] (0xc000a9e630) (0xc00018aa00) Stream removed, broadcasting: 1\nI0324 14:44:33.345845 3722 log.go:172] (0xc000a9e630) Go away received\nI0324 14:44:33.346027 3722 log.go:172] (0xc000a9e630) (0xc00018aa00) Stream removed, broadcasting: 1\nI0324 14:44:33.346041 3722 log.go:172] (0xc000a9e630) (0xc000268000) Stream removed, broadcasting: 3\nI0324 14:44:33.346049 3722 log.go:172] (0xc000a9e630) (0xc000268140) Stream removed, broadcasting: 5\n" Mar 24 14:44:33.349: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 24 14:44:33.349: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 24 14:44:33.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 24 14:44:33.578: INFO: stderr: "I0324 14:44:33.474204 3743 log.go:172] (0xc000998630) (0xc000514dc0) Create stream\nI0324 14:44:33.474258 3743 log.go:172] (0xc000998630) (0xc000514dc0) Stream added, broadcasting: 1\nI0324 14:44:33.476497 3743 log.go:172] (0xc000998630) Reply frame received for 1\nI0324 14:44:33.476579 3743 log.go:172] (0xc000998630) (0xc000966000) Create stream\nI0324 14:44:33.476615 3743 log.go:172] (0xc000998630) (0xc000966000) Stream added, broadcasting: 3\nI0324 14:44:33.477941 
3743 log.go:172] (0xc000998630) Reply frame received for 3\nI0324 14:44:33.477989 3743 log.go:172] (0xc000998630) (0xc00078c000) Create stream\nI0324 14:44:33.478023 3743 log.go:172] (0xc000998630) (0xc00078c000) Stream added, broadcasting: 5\nI0324 14:44:33.479167 3743 log.go:172] (0xc000998630) Reply frame received for 5\nI0324 14:44:33.540436 3743 log.go:172] (0xc000998630) Data frame received for 5\nI0324 14:44:33.540471 3743 log.go:172] (0xc00078c000) (5) Data frame handling\nI0324 14:44:33.540490 3743 log.go:172] (0xc00078c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 14:44:33.571505 3743 log.go:172] (0xc000998630) Data frame received for 3\nI0324 14:44:33.571518 3743 log.go:172] (0xc000966000) (3) Data frame handling\nI0324 14:44:33.571527 3743 log.go:172] (0xc000966000) (3) Data frame sent\nI0324 14:44:33.571533 3743 log.go:172] (0xc000998630) Data frame received for 3\nI0324 14:44:33.571538 3743 log.go:172] (0xc000966000) (3) Data frame handling\nI0324 14:44:33.571680 3743 log.go:172] (0xc000998630) Data frame received for 5\nI0324 14:44:33.571704 3743 log.go:172] (0xc00078c000) (5) Data frame handling\nI0324 14:44:33.573481 3743 log.go:172] (0xc000998630) Data frame received for 1\nI0324 14:44:33.573498 3743 log.go:172] (0xc000514dc0) (1) Data frame handling\nI0324 14:44:33.573507 3743 log.go:172] (0xc000514dc0) (1) Data frame sent\nI0324 14:44:33.573770 3743 log.go:172] (0xc000998630) (0xc000514dc0) Stream removed, broadcasting: 1\nI0324 14:44:33.573975 3743 log.go:172] (0xc000998630) Go away received\nI0324 14:44:33.574016 3743 log.go:172] (0xc000998630) (0xc000514dc0) Stream removed, broadcasting: 1\nI0324 14:44:33.574026 3743 log.go:172] (0xc000998630) (0xc000966000) Stream removed, broadcasting: 3\nI0324 14:44:33.574032 3743 log.go:172] (0xc000998630) (0xc00078c000) Stream removed, broadcasting: 5\n"
Mar 24 14:44:33.578: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 24 14:44:33.578: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Mar 24 14:44:33.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 24 14:44:33.819: INFO: stderr: "I0324 14:44:33.711275 3763 log.go:172] (0xc000116d10) (0xc00063a780) Create stream\nI0324 14:44:33.711339 3763 log.go:172] (0xc000116d10) (0xc00063a780) Stream added, broadcasting: 1\nI0324 14:44:33.713900 3763 log.go:172] (0xc000116d10) Reply frame received for 1\nI0324 14:44:33.713947 3763 log.go:172] (0xc000116d10) (0xc00095e000) Create stream\nI0324 14:44:33.713959 3763 log.go:172] (0xc000116d10) (0xc00095e000) Stream added, broadcasting: 3\nI0324 14:44:33.714905 3763 log.go:172] (0xc000116d10) Reply frame received for 3\nI0324 14:44:33.714948 3763 log.go:172] (0xc000116d10) (0xc00041a000) Create stream\nI0324 14:44:33.714963 3763 log.go:172] (0xc000116d10) (0xc00041a000) Stream added, broadcasting: 5\nI0324 14:44:33.715732 3763 log.go:172] (0xc000116d10) Reply frame received for 5\nI0324 14:44:33.772880 3763 log.go:172] (0xc000116d10) Data frame received for 5\nI0324 14:44:33.772909 3763 log.go:172] (0xc00041a000) (5) Data frame handling\nI0324 14:44:33.772930 3763 log.go:172] (0xc00041a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0324 14:44:33.813065 3763 log.go:172] (0xc000116d10) Data frame received for 5\nI0324 14:44:33.813105 3763 log.go:172] (0xc00041a000) (5) Data frame handling\nI0324 14:44:33.813319 3763 log.go:172] (0xc000116d10) Data frame received for 3\nI0324 14:44:33.813338 3763 log.go:172] (0xc00095e000) (3) Data frame handling\nI0324 14:44:33.813355 3763 log.go:172] (0xc00095e000) (3) Data frame sent\nI0324 14:44:33.813373 3763 log.go:172] (0xc000116d10) Data frame received for 3\nI0324 14:44:33.813381 3763 log.go:172] (0xc00095e000) (3) Data frame handling\nI0324 14:44:33.815154 3763 log.go:172] (0xc000116d10) Data frame received for 1\nI0324 14:44:33.815168 3763 log.go:172] (0xc00063a780) (1) Data frame handling\nI0324 14:44:33.815176 3763 log.go:172] (0xc00063a780) (1) Data frame sent\nI0324 14:44:33.815186 3763 log.go:172] (0xc000116d10) (0xc00063a780) Stream removed, broadcasting: 1\nI0324 14:44:33.815249 3763 log.go:172] (0xc000116d10) Go away received\nI0324 14:44:33.815440 3763 log.go:172] (0xc000116d10) (0xc00063a780) Stream removed, broadcasting: 1\nI0324 14:44:33.815456 3763 log.go:172] (0xc000116d10) (0xc00095e000) Stream removed, broadcasting: 3\nI0324 14:44:33.815463 3763 log.go:172] (0xc000116d10) (0xc00041a000) Stream removed, broadcasting: 5\n"
Mar 24 14:44:33.820: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 24 14:44:33.820: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Mar 24 14:44:33.820: INFO: Waiting for statefulset status.replicas updated to 0
Mar 24 14:44:33.822: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Mar 24 14:44:43.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 24 14:44:43.831: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 24 14:44:43.831: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 24 14:44:43.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999365s
Mar 24 14:44:44.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994659161s
Mar 24 14:44:45.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990175759s
Mar 24 14:44:46.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984477837s
Mar 24 14:44:47.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979142958s
Mar 24 14:44:48.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97438545s
Mar 24 14:44:49.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969307592s
Mar 24 14:44:50.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964094712s
Mar 24 14:44:51.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958702815s
Mar 24 14:44:52.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.19825ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-1573
Mar 24 14:44:53.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 24 14:44:56.592: INFO: stderr: "I0324 14:44:56.493921 3783 log.go:172] (0xc00013ce70) (0xc0006688c0) Create stream\nI0324 14:44:56.493959 3783 log.go:172] (0xc00013ce70) (0xc0006688c0) Stream added, broadcasting: 1\nI0324 14:44:56.496634 3783 log.go:172] (0xc00013ce70) Reply frame received for 1\nI0324 14:44:56.496677 3783 log.go:172] (0xc00013ce70) (0xc000748000) Create stream\nI0324 14:44:56.496691 3783 log.go:172] (0xc00013ce70) (0xc000748000) Stream added, broadcasting: 3\nI0324 14:44:56.497718 3783 log.go:172] (0xc00013ce70) Reply frame received for 3\nI0324 14:44:56.497748 3783 log.go:172] (0xc00013ce70) (0xc000668960) Create stream\nI0324 14:44:56.497756 3783 log.go:172] (0xc00013ce70) (0xc000668960) Stream added, broadcasting: 5\nI0324 14:44:56.498484 3783 log.go:172] (0xc00013ce70) Reply frame received for 5\nI0324 14:44:56.584669 3783 log.go:172] (0xc00013ce70) Data frame received for 3\nI0324 14:44:56.584694 3783 log.go:172] (0xc000748000) (3) Data frame handling\nI0324 14:44:56.584701 3783 log.go:172] (0xc000748000) (3) Data frame sent\nI0324 14:44:56.584706 3783 log.go:172] (0xc00013ce70) Data frame received for 3\nI0324 14:44:56.584710 3783 log.go:172] (0xc000748000) (3) Data frame handling\nI0324 14:44:56.584734 3783 log.go:172] (0xc00013ce70) Data frame received for 5\nI0324 14:44:56.584743 3783 log.go:172] (0xc000668960) (5) Data frame handling\nI0324 14:44:56.584749 3783 log.go:172] (0xc000668960) (5) Data frame sent\nI0324 14:44:56.584754 3783 log.go:172] (0xc00013ce70) Data frame received for 5\nI0324 14:44:56.584758 3783 log.go:172] (0xc000668960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 14:44:56.586613 3783 log.go:172] (0xc00013ce70) Data frame received for 1\nI0324 14:44:56.586658 3783 log.go:172] (0xc0006688c0) (1) Data frame handling\nI0324 14:44:56.586682 3783 log.go:172] (0xc0006688c0) (1) Data frame sent\nI0324 14:44:56.586708 3783 log.go:172] (0xc00013ce70) (0xc0006688c0) Stream removed, broadcasting: 1\nI0324 14:44:56.586856 3783 log.go:172] (0xc00013ce70) Go away received\nI0324 14:44:56.587215 3783 log.go:172] (0xc00013ce70) (0xc0006688c0) Stream removed, broadcasting: 1\nI0324 14:44:56.587251 3783 log.go:172] (0xc00013ce70) (0xc000748000) Stream removed, broadcasting: 3\nI0324 14:44:56.587276 3783 log.go:172] (0xc00013ce70) (0xc000668960) Stream removed, broadcasting: 5\n"
Mar 24 14:44:56.593: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 24 14:44:56.593: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 24 14:44:56.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 24 14:44:56.777: INFO: stderr: "I0324 14:44:56.704836 3814 log.go:172] (0xc00013e8f0) (0xc00088d720) Create stream\nI0324 14:44:56.704884 3814 log.go:172] (0xc00013e8f0) (0xc00088d720) Stream added, broadcasting: 1\nI0324 14:44:56.707270 3814 log.go:172] (0xc00013e8f0) Reply frame received for 1\nI0324 14:44:56.707317 3814 log.go:172] (0xc00013e8f0) (0xc00051bc20) Create stream\nI0324 14:44:56.707330 3814 log.go:172] (0xc00013e8f0) (0xc00051bc20) Stream added, broadcasting: 3\nI0324 14:44:56.708279 3814 log.go:172] (0xc00013e8f0) Reply frame received for 3\nI0324 14:44:56.708317 3814 log.go:172] (0xc00013e8f0) (0xc00088d7c0) Create stream\nI0324 14:44:56.708327 3814 log.go:172] (0xc00013e8f0) (0xc00088d7c0) Stream added, broadcasting: 5\nI0324 14:44:56.709078 3814 log.go:172] (0xc00013e8f0) Reply frame received for 5\nI0324 14:44:56.765466 3814 log.go:172] (0xc00013e8f0) Data frame received for 5\nI0324 14:44:56.765528 3814 log.go:172] (0xc00088d7c0) (5) Data frame handling\nI0324 14:44:56.765550 3814 log.go:172] (0xc00088d7c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 14:44:56.765581 3814 log.go:172] (0xc00013e8f0) Data frame received for 3\nI0324 14:44:56.765606 3814 log.go:172] (0xc00051bc20) (3) Data frame handling\nI0324 14:44:56.765636 3814 log.go:172] (0xc00051bc20) (3) Data frame sent\nI0324 14:44:56.765651 3814 log.go:172] (0xc00013e8f0) Data frame received for 3\nI0324 14:44:56.765665 3814 log.go:172] (0xc00051bc20) (3) Data frame handling\nI0324 14:44:56.765714 3814 log.go:172] (0xc00013e8f0) Data frame received for 5\nI0324 14:44:56.765745 3814 log.go:172] (0xc00088d7c0) (5) Data frame handling\nI0324 14:44:56.771726 3814 log.go:172] (0xc00013e8f0) Data frame received for 1\nI0324 14:44:56.771748 3814 log.go:172] (0xc00088d720) (1) Data frame handling\nI0324 14:44:56.771767 3814 log.go:172] (0xc00088d720) (1) Data frame sent\nI0324 14:44:56.771780 3814 log.go:172] (0xc00013e8f0) (0xc00088d720) Stream removed, broadcasting: 1\nI0324 14:44:56.772275 3814 log.go:172] (0xc00013e8f0) (0xc00088d720) Stream removed, broadcasting: 1\nI0324 14:44:56.772306 3814 log.go:172] (0xc00013e8f0) (0xc00051bc20) Stream removed, broadcasting: 3\nI0324 14:44:56.772507 3814 log.go:172] (0xc00013e8f0) (0xc00088d7c0) Stream removed, broadcasting: 5\n"
Mar 24 14:44:56.777: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 24 14:44:56.777: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 24 14:44:56.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1573 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 24 14:44:56.964: INFO: stderr: "I0324 14:44:56.897845 3834 log.go:172] (0xc000962370) (0xc000900640) Create stream\nI0324 14:44:56.897925 3834 log.go:172] (0xc000962370) (0xc000900640) Stream added, broadcasting: 1\nI0324 14:44:56.900387 3834 log.go:172] (0xc000962370) Reply frame received for 1\nI0324 14:44:56.900436 3834 log.go:172] (0xc000962370) (0xc0009006e0) Create stream\nI0324 14:44:56.900451 3834 log.go:172] (0xc000962370) (0xc0009006e0) Stream added, broadcasting: 3\nI0324 14:44:56.901606 3834 log.go:172] (0xc000962370) Reply frame received for 3\nI0324 14:44:56.901653 3834 log.go:172] (0xc000962370) (0xc000858000) Create stream\nI0324 14:44:56.901670 3834 log.go:172] (0xc000962370) (0xc000858000) Stream added, broadcasting: 5\nI0324 14:44:56.902611 3834 log.go:172] (0xc000962370) Reply frame received for 5\nI0324 14:44:56.957294 3834 log.go:172] (0xc000962370) Data frame received for 5\nI0324 14:44:56.957318 3834 log.go:172] (0xc000858000) (5) Data frame handling\nI0324 14:44:56.957327 3834 log.go:172] (0xc000858000) (5) Data frame sent\nI0324 14:44:56.957333 3834 log.go:172] (0xc000962370) Data frame received for 5\nI0324 14:44:56.957339 3834 log.go:172] (0xc000858000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0324 14:44:56.957397 3834 log.go:172] (0xc000962370) Data frame received for 3\nI0324 14:44:56.957423 3834 log.go:172] (0xc0009006e0) (3) Data frame handling\nI0324 14:44:56.957445 3834 log.go:172] (0xc0009006e0) (3) Data frame sent\nI0324 14:44:56.957459 3834 log.go:172] (0xc000962370) Data frame received for 3\nI0324 14:44:56.957470 3834 log.go:172] (0xc0009006e0) (3) Data frame handling\nI0324 14:44:56.959075 3834 log.go:172] (0xc000962370) Data frame received for 1\nI0324 14:44:56.959100 3834 log.go:172] (0xc000900640) (1) Data frame handling\nI0324 14:44:56.959115 3834 log.go:172] (0xc000900640) (1) Data frame sent\nI0324 14:44:56.959150 3834 log.go:172] (0xc000962370) (0xc000900640) Stream removed, broadcasting: 1\nI0324 14:44:56.959179 3834 log.go:172] (0xc000962370) Go away received\nI0324 14:44:56.959589 3834 log.go:172] (0xc000962370) (0xc000900640) Stream removed, broadcasting: 1\nI0324 14:44:56.959612 3834 log.go:172] (0xc000962370) (0xc0009006e0) Stream removed, broadcasting: 3\nI0324 14:44:56.959629 3834 log.go:172] (0xc000962370) (0xc000858000) Stream removed, broadcasting: 5\n"
Mar 24 14:44:56.964: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 24 14:44:56.964: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 24 14:44:56.964: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Mar 24 14:45:16.980: INFO: Deleting all statefulset in ns statefulset-1573
Mar 24 14:45:16.984: INFO: Scaling statefulset ss to 0
Mar 24 14:45:16.993: INFO: Waiting for statefulset status.replicas updated to 0
Mar 24 14:45:16.995: INFO: Deleting statefulset ss
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 24 14:45:17.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1573" for this suite.
Mar 24 14:45:23.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 24 14:45:23.111: INFO: namespace statefulset-1573 deletion completed in 6.103020439s

• [SLOW TEST:90.664 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Mar 24 14:45:23.112: INFO: Running AfterSuite actions on all nodes
Mar 24 14:45:23.112: INFO: Running AfterSuite actions on node 1
Mar 24 14:45:23.112: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 6573.926 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (6574.13s)
FAIL