I0415 12:55:44.229145 6 e2e.go:243] Starting e2e run "f992be92-f95b-4ac4-a0e2-2e77f59696c5" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586955343 - Will randomize all specs
Will run 215 of 4412 specs

Apr 15 12:55:44.421: INFO: >>> kubeConfig: /root/.kube/config
Apr 15 12:55:44.424: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 15 12:55:44.442: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 15 12:55:44.469: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 15 12:55:44.469: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 15 12:55:44.469: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 15 12:55:44.489: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 15 12:55:44.489: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 15 12:55:44.489: INFO: e2e test version: v1.15.11
Apr 15 12:55:44.490: INFO: kube-apiserver version: v1.15.7
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:55:44.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Apr 15 12:55:44.536: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-24fl
STEP: Creating a pod to test atomic-volume-subpath
Apr 15 12:55:44.559: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-24fl" in namespace "subpath-2338" to be "success or failure"
Apr 15 12:55:44.563: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.807268ms
Apr 15 12:55:46.567: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008284217s
Apr 15 12:55:48.571: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 4.011967742s
Apr 15 12:55:50.575: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 6.016169971s
Apr 15 12:55:52.579: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 8.020330216s
Apr 15 12:55:54.583: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 10.024542957s
Apr 15 12:55:56.587: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 12.028310401s
Apr 15 12:55:58.591: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 14.032400191s
Apr 15 12:56:00.595: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 16.035639577s
Apr 15 12:56:02.599: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 18.039930362s
Apr 15 12:56:04.603: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 20.044309944s
Apr 15 12:56:06.606: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Running", Reason="", readiness=true. Elapsed: 22.047490738s
Apr 15 12:56:08.610: INFO: Pod "pod-subpath-test-configmap-24fl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.051607931s
STEP: Saw pod success
Apr 15 12:56:08.611: INFO: Pod "pod-subpath-test-configmap-24fl" satisfied condition "success or failure"
Apr 15 12:56:08.614: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-24fl container test-container-subpath-configmap-24fl: 
STEP: delete the pod
Apr 15 12:56:08.637: INFO: Waiting for pod pod-subpath-test-configmap-24fl to disappear
Apr 15 12:56:08.641: INFO: Pod pod-subpath-test-configmap-24fl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-24fl
Apr 15 12:56:08.641: INFO: Deleting pod "pod-subpath-test-configmap-24fl" in namespace "subpath-2338"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 12:56:08.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2338" for this suite.
Apr 15 12:56:14.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 12:56:14.761: INFO: namespace subpath-2338 deletion completed in 6.094293942s

• [SLOW TEST:30.271 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:56:14.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 15 12:56:18.859: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 15 12:56:33.943: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 12:56:33.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1603" for this suite.
Apr 15 12:56:39.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 12:56:40.067: INFO: namespace pods-1603 deletion completed in 6.117126327s

• [SLOW TEST:25.305 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:56:40.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6005
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6005
STEP: Creating statefulset with conflicting port in namespace statefulset-6005
STEP: Waiting until pod test-pod will start running in namespace statefulset-6005
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6005
Apr 15 12:56:44.276: INFO: Observed stateful pod in namespace: statefulset-6005, name: ss-0, uid: 3cc5283a-87ea-43ad-9ee0-4582d22bba8b, status phase: Pending. Waiting for statefulset controller to delete.
Apr 15 12:56:44.418: INFO: Observed stateful pod in namespace: statefulset-6005, name: ss-0, uid: 3cc5283a-87ea-43ad-9ee0-4582d22bba8b, status phase: Failed. Waiting for statefulset controller to delete.
Apr 15 12:56:44.425: INFO: Observed stateful pod in namespace: statefulset-6005, name: ss-0, uid: 3cc5283a-87ea-43ad-9ee0-4582d22bba8b, status phase: Failed. Waiting for statefulset controller to delete.
Apr 15 12:56:44.451: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6005
STEP: Removing pod with conflicting port in namespace statefulset-6005
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6005 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 15 12:56:48.542: INFO: Deleting all statefulset in ns statefulset-6005
Apr 15 12:56:48.545: INFO: Scaling statefulset ss to 0
Apr 15 12:56:58.564: INFO: Waiting for statefulset status.replicas updated to 0
Apr 15 12:56:58.567: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 12:56:58.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6005" for this suite.
Apr 15 12:57:04.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 12:57:04.716: INFO: namespace statefulset-6005 deletion completed in 6.132954393s

• [SLOW TEST:24.649 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:57:04.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3709.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3709.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 12:57:10.832: INFO: DNS probes using dns-test-69c374bf-09e0-4bcb-80d4-5c34b8388eb5 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3709.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3709.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 12:57:16.936: INFO: File wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:16.944: INFO: File jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:16.944: INFO: Lookups using dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 failed for: [wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local]
Apr 15 12:57:21.949: INFO: File wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:21.952: INFO: File jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:21.952: INFO: Lookups using dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 failed for: [wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local]
Apr 15 12:57:26.949: INFO: File wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:26.953: INFO: File jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:26.953: INFO: Lookups using dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 failed for: [wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local]
Apr 15 12:57:31.948: INFO: File wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:31.952: INFO: File jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:31.952: INFO: Lookups using dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 failed for: [wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local]
Apr 15 12:57:36.949: INFO: File wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:36.953: INFO: File jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local from pod dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 15 12:57:36.953: INFO: Lookups using dns-3709/dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 failed for: [wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local]
Apr 15 12:57:41.952: INFO: DNS probes using dns-test-10a263bf-8225-4a54-a7ea-aaad2f05ec79 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3709.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3709.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3709.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3709.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 12:57:48.588: INFO: DNS probes using dns-test-e147f282-3b83-4df7-bfd0-c329f0b22f2f succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 12:57:48.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3709" for this suite.
Apr 15 12:57:54.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 12:57:54.787: INFO: namespace dns-3709 deletion completed in 6.08045295s

• [SLOW TEST:50.070 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:57:54.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 15 12:57:54.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6110'
Apr 15 12:57:56.914: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 15 12:57:56.914: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr 15 12:57:58.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6110'
Apr 15 12:57:59.085: INFO: stderr: ""
Apr 15 12:57:59.085: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 12:57:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6110" for this suite.
Apr 15 12:59:47.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 12:59:47.191: INFO: namespace kubectl-6110 deletion completed in 1m48.10367473s

• [SLOW TEST:112.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:59:47.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Apr 15 12:59:53.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-19adbec7-85e3-4a7f-b559-c17beabb0ee1 -c busybox-main-container --namespace=emptydir-3087 -- cat /usr/share/volumeshare/shareddata.txt'
Apr 15 12:59:53.533: INFO: stderr: "I0415 12:59:53.429838 109 log.go:172] (0xc00012ae70) (0xc000940a00) Create stream\nI0415 12:59:53.429892 109 log.go:172] (0xc00012ae70) (0xc000940a00) Stream added, broadcasting: 1\nI0415 12:59:53.432871 109 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0415 12:59:53.432904 109 log.go:172] (0xc00012ae70) (0xc000940aa0) Create stream\nI0415 12:59:53.432914 109 log.go:172] (0xc00012ae70) (0xc000940aa0) Stream added, broadcasting: 3\nI0415 12:59:53.434162 109 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0415 12:59:53.434205 109 log.go:172] (0xc00012ae70) (0xc000940b40) Create stream\nI0415 12:59:53.434224 109 log.go:172] (0xc00012ae70) (0xc000940b40) Stream added, broadcasting: 5\nI0415 12:59:53.435317 109 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0415 12:59:53.524669 109 log.go:172] (0xc00012ae70) Data frame received for 5\nI0415 12:59:53.524712 109 log.go:172] (0xc000940b40) (5) Data frame handling\nI0415 12:59:53.524759 109 log.go:172] (0xc00012ae70) Data frame received for 3\nI0415 12:59:53.524791 109 log.go:172] (0xc000940aa0) (3) Data frame handling\nI0415 12:59:53.524843 109 log.go:172] (0xc000940aa0) (3) Data frame sent\nI0415 12:59:53.524870 109 log.go:172] (0xc00012ae70) Data frame received for 3\nI0415 12:59:53.524889 109 log.go:172] (0xc000940aa0) (3) Data frame handling\nI0415 12:59:53.527308 109 log.go:172] (0xc00012ae70) Data frame received for 1\nI0415 12:59:53.527342 109 log.go:172] (0xc000940a00) (1) Data frame handling\nI0415 12:59:53.527361 109 log.go:172] (0xc000940a00) (1) Data frame sent\nI0415 12:59:53.527392 109 log.go:172] (0xc00012ae70) (0xc000940a00) Stream removed, broadcasting: 1\nI0415 12:59:53.527417 109 log.go:172] (0xc00012ae70) Go away received\nI0415 12:59:53.527889 109 log.go:172] (0xc00012ae70) (0xc000940a00) Stream removed, broadcasting: 1\nI0415 12:59:53.527913 109 log.go:172] (0xc00012ae70) (0xc000940aa0) Stream removed, broadcasting: 3\nI0415 12:59:53.527925 109 log.go:172] (0xc00012ae70) (0xc000940b40) Stream removed, broadcasting: 5\n"
Apr 15 12:59:53.533: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 12:59:53.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3087" for this suite.
Apr 15 12:59:59.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 12:59:59.698: INFO: namespace emptydir-3087 deletion completed in 6.160781104s

• [SLOW TEST:12.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 12:59:59.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-12003aee-df1c-486f-a52c-e897ca02ea0d
STEP: Creating a pod to test consume secrets
Apr 15 12:59:59.758: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817" in namespace "projected-5180" to be "success or failure"
Apr 15 12:59:59.761: INFO: Pod "pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817": Phase="Pending", Reason="", readiness=false. Elapsed: 3.544052ms
Apr 15 13:00:01.765: INFO: Pod "pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007440468s
Apr 15 13:00:03.770: INFO: Pod "pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011870453s
STEP: Saw pod success
Apr 15 13:00:03.770: INFO: Pod "pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817" satisfied condition "success or failure"
Apr 15 13:00:03.773: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817 container projected-secret-volume-test: 
STEP: delete the pod
Apr 15 13:00:03.833: INFO: Waiting for pod pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817 to disappear
Apr 15 13:00:03.849: INFO: Pod pod-projected-secrets-514a0d04-2da0-4742-b665-b6563fcdf817 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:00:03.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5180" for this suite.
Apr 15 13:00:09.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:00:09.940: INFO: namespace projected-5180 deletion completed in 6.088582561s

• [SLOW TEST:10.242 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:00:09.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5c5ef700-cea9-402f-82f3-5f029058e62e
STEP: Creating a pod to test consume configMaps
Apr 15 13:00:10.031: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397" in namespace "projected-422" to be "success or failure"
Apr 15 13:00:10.035: INFO: Pod "pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419164ms
Apr 15 13:00:12.039: INFO: Pod "pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00852684s
Apr 15 13:00:14.043: INFO: Pod "pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011881844s
STEP: Saw pod success
Apr 15 13:00:14.043: INFO: Pod "pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397" satisfied condition "success or failure"
Apr 15 13:00:14.045: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 15 13:00:14.065: INFO: Waiting for pod pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397 to disappear
Apr 15 13:00:14.080: INFO: Pod pod-projected-configmaps-bb0f2465-1676-4a9e-9fe9-5f692542e397 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:00:14.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-422" for this suite.
Apr 15 13:00:20.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:00:20.196: INFO: namespace projected-422 deletion completed in 6.112495189s
• [SLOW TEST:10.256 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:00:20.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 13:00:20.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f" in namespace "downward-api-4633" to be "success or failure"
Apr 15 13:00:20.258: INFO: Pod "downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.969344ms
Apr 15 13:00:22.263: INFO: Pod "downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008445541s
Apr 15 13:00:24.267: INFO: Pod "downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012334464s
STEP: Saw pod success
Apr 15 13:00:24.267: INFO: Pod "downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f" satisfied condition "success or failure"
Apr 15 13:00:24.269: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f container client-container:
STEP: delete the pod
Apr 15 13:00:24.302: INFO: Waiting for pod downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f to disappear
Apr 15 13:00:24.400: INFO: Pod downwardapi-volume-7ae958d7-d126-49a7-9cd7-c01589be882f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:00:24.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4633" for this suite.
Apr 15 13:00:30.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:00:30.531: INFO: namespace downward-api-4633 deletion completed in 6.126876942s
• [SLOW TEST:10.334 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:00:30.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 15 13:00:38.679: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:38.730: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:40.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:40.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:42.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:42.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:44.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:44.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:46.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:46.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:48.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:48.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:50.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:50.734: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:52.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:52.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:54.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:54.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:56.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:56.735: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 15 13:00:58.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 15 13:00:58.734: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:00:58.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1230" for this suite.
Apr 15 13:01:20.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:01:20.830: INFO: namespace container-lifecycle-hook-1230 deletion completed in 22.093020974s
• [SLOW TEST:50.299 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:01:20.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 13:01:20.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5" in namespace "downward-api-2466" to be "success or failure"
Apr 15 13:01:20.905: INFO: Pod "downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.07572ms
Apr 15 13:01:22.929: INFO: Pod "downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038347443s
Apr 15 13:01:24.932: INFO: Pod "downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042267312s
STEP: Saw pod success
Apr 15 13:01:24.933: INFO: Pod "downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5" satisfied condition "success or failure"
Apr 15 13:01:24.935: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5 container client-container:
STEP: delete the pod
Apr 15 13:01:24.996: INFO: Waiting for pod downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5 to disappear
Apr 15 13:01:25.008: INFO: Pod downwardapi-volume-a368dd72-3c7e-42f8-8bc7-90f05631a2e5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:01:25.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2466" for this suite.
Apr 15 13:01:31.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:01:31.149: INFO: namespace downward-api-2466 deletion completed in 6.138376643s
• [SLOW TEST:10.319 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:01:31.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 15 13:01:34.344: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:01:34.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7767" for this suite.
Apr 15 13:01:40.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:01:40.569: INFO: namespace container-runtime-7767 deletion completed in 6.13555933s
• [SLOW TEST:9.419 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:01:40.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:01:44.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2509" for this suite.
Apr 15 13:02:24.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:02:24.785: INFO: namespace kubelet-test-2509 deletion completed in 40.118912387s
• [SLOW TEST:44.214 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:02:24.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 13:02:24.921: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 15 13:02:24.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:24.944: INFO: Number of nodes with available pods: 0
Apr 15 13:02:24.944: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:25.949: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:25.952: INFO: Number of nodes with available pods: 0
Apr 15 13:02:25.952: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:26.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:26.972: INFO: Number of nodes with available pods: 0
Apr 15 13:02:26.972: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:27.960: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:27.963: INFO: Number of nodes with available pods: 0
Apr 15 13:02:27.963: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:28.949: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:28.951: INFO: Number of nodes with available pods: 2
Apr 15 13:02:28.951: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 15 13:02:28.989: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:28.989: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:29.014: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:30.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:30.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:30.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:31.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:31.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:31.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:32.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:32.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:32.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:33.018: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:33.018: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:33.018: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:33.021: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:34.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:34.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:34.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:34.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:35.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:35.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:35.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:35.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:36.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:36.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:36.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:36.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:37.018: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:37.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:37.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:37.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:38.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:38.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:38.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:38.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:39.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:39.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:39.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:39.026: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:40.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:40.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:40.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:40.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:41.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:41.019: INFO: Wrong image for pod: daemon-set-tkzbf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:41.019: INFO: Pod daemon-set-tkzbf is not available
Apr 15 13:02:41.022: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:42.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:42.019: INFO: Pod daemon-set-vjhtk is not available
Apr 15 13:02:42.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:43.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:43.019: INFO: Pod daemon-set-vjhtk is not available
Apr 15 13:02:43.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:44.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:44.019: INFO: Pod daemon-set-vjhtk is not available
Apr 15 13:02:44.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:45.098: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:45.104: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:46.020: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:46.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:47.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:47.019: INFO: Pod daemon-set-ndwjk is not available
Apr 15 13:02:47.022: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:48.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:48.019: INFO: Pod daemon-set-ndwjk is not available
Apr 15 13:02:48.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:49.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:49.019: INFO: Pod daemon-set-ndwjk is not available
Apr 15 13:02:49.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:50.018: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:50.018: INFO: Pod daemon-set-ndwjk is not available
Apr 15 13:02:50.022: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:51.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:51.019: INFO: Pod daemon-set-ndwjk is not available
Apr 15 13:02:51.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:52.019: INFO: Wrong image for pod: daemon-set-ndwjk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 15 13:02:52.019: INFO: Pod daemon-set-ndwjk is not available
Apr 15 13:02:52.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:53.018: INFO: Pod daemon-set-dml9s is not available
Apr 15 13:02:53.022: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 15 13:02:53.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:53.028: INFO: Number of nodes with available pods: 1
Apr 15 13:02:53.028: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:54.033: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:54.036: INFO: Number of nodes with available pods: 1
Apr 15 13:02:54.036: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:55.044: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:55.048: INFO: Number of nodes with available pods: 1
Apr 15 13:02:55.048: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:02:56.034: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:02:56.038: INFO: Number of nodes with available pods: 2
Apr 15 13:02:56.038: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-461, will wait for the garbage collector to delete the pods
Apr 15 13:02:56.109: INFO: Deleting DaemonSet.extensions daemon-set took: 6.380042ms
Apr 15 13:02:56.409: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.224835ms
Apr 15 13:03:02.212: INFO: Number of nodes with available pods: 0
Apr 15 13:03:02.212: INFO: Number of running nodes: 0, number of available pods: 0
Apr 15 13:03:02.214: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-461/daemonsets","resourceVersion":"5558184"},"items":null} Apr 15 13:03:02.217: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-461/pods","resourceVersion":"5558184"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:02.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-461" for this suite. Apr 15 13:03:08.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:03:08.335: INFO: namespace daemonsets-461 deletion completed in 6.106666032s • [SLOW TEST:43.550 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:03:08.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-56037e23-733b-4eee-849f-4fea463674a4 STEP: Creating a pod to test consume secrets Apr 15 13:03:08.404: INFO: Waiting up to 5m0s for pod "pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91" in namespace "secrets-5778" to be "success or failure" Apr 15 13:03:08.408: INFO: Pod "pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059936ms Apr 15 13:03:10.412: INFO: Pod "pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008343472s Apr 15 13:03:12.417: INFO: Pod "pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01237946s STEP: Saw pod success Apr 15 13:03:12.417: INFO: Pod "pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91" satisfied condition "success or failure" Apr 15 13:03:12.420: INFO: Trying to get logs from node iruya-worker pod pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91 container secret-volume-test: STEP: delete the pod Apr 15 13:03:12.475: INFO: Waiting for pod pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91 to disappear Apr 15 13:03:12.480: INFO: Pod pod-secrets-168ba13f-dfbc-4ca1-8b8b-5da42bbeab91 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:12.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5778" for this suite. 
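The Secrets test mounts a Secret volume with an explicit `defaultMode` and checks the file permissions from inside the pod. A hedged sketch of such a pod, with illustrative names and mode (note that Kubernetes serializes the octal mode as a decimal integer in API responses, so `0644` appears as `420`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test     # container name as logged above
    image: busybox:1.29          # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative; the test generates a UUID-suffixed name
      defaultMode: 0400                 # serialized as 256 (decimal) by the API server
```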
Apr 15 13:03:18.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:03:18.587: INFO: namespace secrets-5778 deletion completed in 6.102891148s • [SLOW TEST:10.251 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:03:18.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 15 13:03:18.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 15 13:03:18.664: INFO: Waiting for terminating namespaces to be deleted... 
Apr 15 13:03:18.667: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 15 13:03:18.671: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 15 13:03:18.671: INFO: Container kube-proxy ready: true, restart count 0 Apr 15 13:03:18.671: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 15 13:03:18.671: INFO: Container kindnet-cni ready: true, restart count 0 Apr 15 13:03:18.671: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 15 13:03:18.677: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 15 13:03:18.677: INFO: Container kube-proxy ready: true, restart count 0 Apr 15 13:03:18.677: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 15 13:03:18.677: INFO: Container kindnet-cni ready: true, restart count 0 Apr 15 13:03:18.677: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 15 13:03:18.677: INFO: Container coredns ready: true, restart count 0 Apr 15 13:03:18.677: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 15 13:03:18.677: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d70e4d2a-0103-466b-9329-bef9bc7df1c2 42 STEP: Trying to relaunch the pod, now with labels. 
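The scheduling steps above apply a random label to the found node and relaunch the pod with a matching `nodeSelector`. Using the label key and value visible in the log (the pod name and image are illustrative), the relaunched pod would look roughly like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels    # illustrative name
spec:
  nodeSelector:
    # the random label the test applied to node iruya-worker, per the log above
    kubernetes.io/e2e-d70e4d2a-0103-466b-9329-bef9bc7df1c2: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # illustrative image
```

Because the selector matches only the freshly labeled node, the scheduler must place the pod there, which is what the predicate test asserts.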
STEP: removing the label kubernetes.io/e2e-d70e4d2a-0103-466b-9329-bef9bc7df1c2 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d70e4d2a-0103-466b-9329-bef9bc7df1c2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:26.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1914" for this suite. Apr 15 13:03:34.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:03:34.959: INFO: namespace sched-pred-1914 deletion completed in 8.096377306s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:16.372 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:03:34.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination 
message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 15 13:03:39.086: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:39.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1895" for this suite. Apr 15 13:03:45.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:03:45.255: INFO: namespace container-runtime-1895 deletion completed in 6.102526678s • [SLOW TEST:10.296 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:03:45.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-e6977521-2c0e-44f6-85b0-2547ce848399 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:45.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5686" for this suite. Apr 15 13:03:51.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:03:51.426: INFO: namespace configmap-5686 deletion completed in 6.103390315s • [SLOW TEST:6.171 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:03:51.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 15 13:03:51.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 15 13:03:51.681: INFO: stderr: "" Apr 15 13:03:51.681: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:51.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2252" for this suite. 
Apr 15 13:03:57.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:03:57.832: INFO: namespace kubectl-2252 deletion completed in 6.146181713s • [SLOW TEST:6.405 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:03:57.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0415 13:03:58.943599 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 15 13:03:58.943: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:03:58.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1168" for this suite. 
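The garbage-collector test creates a Deployment, deletes it without orphaning, and waits for the owned ReplicaSet and pods to be removed via their `ownerReferences` — the transient "expected 0 rs, got 1 rs" step shows the collector catching up asynchronously. A sketch of the kind of Deployment involved (all names and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-deployment   # illustrative name
spec:
  replicas: 2               # matches the 2 pods briefly observed before collection
  selector:
    matchLabels:
      app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # illustrative image
```

Deleting such a Deployment with a non-orphaning propagation policy (e.g. `propagationPolicy: Background`) leaves the dependent ReplicaSet and pods for the garbage collector, which removes them by following their owner references.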
Apr 15 13:04:05.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:04:05.099: INFO: namespace gc-1168 deletion completed in 6.152580075s • [SLOW TEST:7.267 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:04:05.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 15 13:04:05.223: INFO: Waiting up to 5m0s for pod "downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec" in namespace "downward-api-5914" to be "success or failure" Apr 15 13:04:05.228: INFO: Pod "downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.847753ms Apr 15 13:04:07.247: INFO: Pod "downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024527597s Apr 15 13:04:09.251: INFO: Pod "downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028151319s STEP: Saw pod success Apr 15 13:04:09.251: INFO: Pod "downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec" satisfied condition "success or failure" Apr 15 13:04:09.254: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec container dapi-container: STEP: delete the pod Apr 15 13:04:09.271: INFO: Waiting for pod downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec to disappear Apr 15 13:04:09.291: INFO: Pod downward-api-b99dea09-67f2-481b-a488-de89ce0f79ec no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:04:09.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5914" for this suite. Apr 15 13:04:15.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:04:15.385: INFO: namespace downward-api-5914 deletion completed in 6.090439496s • [SLOW TEST:10.286 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:04:15.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 15 13:04:15.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1895' Apr 15 13:04:15.620: INFO: stderr: "" Apr 15 13:04:15.621: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 15 13:04:20.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1895 -o json' Apr 15 13:04:20.767: INFO: stderr: "" Apr 15 13:04:20.767: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-15T13:04:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-1895\",\n \"resourceVersion\": \"5558544\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1895/pods/e2e-test-nginx-pod\",\n \"uid\": \"53b19aad-9a26-4ff2-a2b7-bde027a2f11e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-928mr\",\n 
\"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-928mr\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-928mr\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-15T13:04:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-15T13:04:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-15T13:04:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-15T13:04:15Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://09ea9186c9f05eb909b480a2a3f9b90d18efcefdb3c75b10a96c825fbe3f7b57\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-15T13:04:17Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": 
\"10.244.2.250\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-15T13:04:15Z\"\n }\n}\n" STEP: replace the image in the pod Apr 15 13:04:20.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1895' Apr 15 13:04:21.090: INFO: stderr: "" Apr 15 13:04:21.090: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 15 13:04:21.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1895' Apr 15 13:04:24.460: INFO: stderr: "" Apr 15 13:04:24.460: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:04:24.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1895" for this suite. 
Apr 15 13:04:30.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:04:30.578: INFO: namespace kubectl-1895 deletion completed in 6.101104024s • [SLOW TEST:15.192 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:04:30.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 15 13:04:38.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:38.704: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:40.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:40.708: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:42.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:42.708: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:44.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:44.708: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:46.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:46.708: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:48.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:48.709: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:50.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:50.707: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 13:04:52.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 13:04:52.728: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:04:52.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8323" for this suite. 
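The lifecycle-hook test above deletes a pod carrying a `preStop` HTTP hook and then polls until the pod disappears, finally checking that the handler pod received the hook request. A sketch of such a pod, assuming illustrative image, path, and port (the e2e test points the hook at its separately created handler pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name as logged above
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # illustrative; the test targets its handler pod
          port: 8080                 # illustrative port
```

The exec-hook variant in the next test is the same shape with `httpGet` replaced by `exec` and a `command` list; in both cases the kubelet runs the hook before sending the container its termination signal, which is why deletion takes several poll iterations.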
Apr 15 13:05:14.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:05:14.842: INFO: namespace container-lifecycle-hook-8323 deletion completed in 22.106207245s • [SLOW TEST:44.264 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:05:14.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 15 13:05:22.972: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:23.014: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:25.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:25.033: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:27.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:27.018: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:29.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:29.019: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:31.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:31.019: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:33.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:33.019: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:35.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:35.018: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:37.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:37.018: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:39.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:39.035: INFO: Pod pod-with-prestop-exec-hook still exists Apr 15 13:05:41.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 15 13:05:41.019: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:05:41.025: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3896" for this suite. Apr 15 13:06:03.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:06:03.121: INFO: namespace container-lifecycle-hook-3896 deletion completed in 22.091972s • [SLOW TEST:48.278 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:06:03.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 15 13:06:03.209: INFO: Waiting up to 5m0s for pod "client-containers-65f56123-52ec-45e4-939e-67b09e09176d" in namespace "containers-2421" to be "success or failure" Apr 15 13:06:03.212: INFO: Pod "client-containers-65f56123-52ec-45e4-939e-67b09e09176d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.989731ms Apr 15 13:06:05.217: INFO: Pod "client-containers-65f56123-52ec-45e4-939e-67b09e09176d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007642404s Apr 15 13:06:07.222: INFO: Pod "client-containers-65f56123-52ec-45e4-939e-67b09e09176d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012822012s STEP: Saw pod success Apr 15 13:06:07.222: INFO: Pod "client-containers-65f56123-52ec-45e4-939e-67b09e09176d" satisfied condition "success or failure" Apr 15 13:06:07.224: INFO: Trying to get logs from node iruya-worker pod client-containers-65f56123-52ec-45e4-939e-67b09e09176d container test-container: STEP: delete the pod Apr 15 13:06:07.239: INFO: Waiting for pod client-containers-65f56123-52ec-45e4-939e-67b09e09176d to disappear Apr 15 13:06:07.243: INFO: Pod client-containers-65f56123-52ec-45e4-939e-67b09e09176d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:06:07.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2421" for this suite. 
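The "docker cmd" test above verifies that a pod's `args` field overrides the image's default CMD while leaving the ENTRYPOINT in place. A minimal config sketch of the mechanism (the pod name, image, and arguments here are illustrative, not the ones the e2e framework actually uses):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo    # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumed image; the suite uses its own test image
    # `args` replaces the image's default CMD; the ENTRYPOINT is kept.
    # Setting `command` instead would replace the ENTRYPOINT as well.
    args: ["echo", "overridden arguments"]
```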
Apr 15 13:06:13.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:06:13.354: INFO: namespace containers-2421 deletion completed in 6.1079053s • [SLOW TEST:10.233 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:06:13.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 13:06:13.432: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e" in namespace "downward-api-433" to be "success or failure" Apr 15 13:06:13.435: INFO: Pod "downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.471073ms Apr 15 13:06:15.507: INFO: Pod "downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075503017s Apr 15 13:06:17.512: INFO: Pod "downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079761542s STEP: Saw pod success Apr 15 13:06:17.512: INFO: Pod "downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e" satisfied condition "success or failure" Apr 15 13:06:17.514: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e container client-container: STEP: delete the pod Apr 15 13:06:17.533: INFO: Waiting for pod downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e to disappear Apr 15 13:06:17.537: INFO: Pod downwardapi-volume-f4b66306-93c8-4f2e-b807-f21b2a0d5b5e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:06:17.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-433" for this suite. 
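The Downward API volume test above mounts a file whose content is the container's own memory request, then reads it back from inside the container. A config sketch of that mechanism, assuming illustrative names and a hypothetical 32Mi request:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox             # assumed image
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        # resourceFieldRef exposes the container's resource request as file content
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```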
Apr 15 13:06:23.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:06:23.677: INFO: namespace downward-api-433 deletion completed in 6.137221s • [SLOW TEST:10.322 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:06:23.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-2f10bef7-9bcf-40a7-a169-75a25c033bc6 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:06:27.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8056" for this suite. 
Apr 15 13:06:49.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:06:49.928: INFO: namespace configmap-8056 deletion completed in 22.091341575s • [SLOW TEST:26.250 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:06:49.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:06:49.998: INFO: Creating deployment "nginx-deployment" Apr 15 13:06:50.004: INFO: Waiting for observed generation 1 Apr 15 13:06:52.014: INFO: Waiting for all required pods to come up Apr 15 13:06:52.019: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 15 13:07:00.030: INFO: Waiting for deployment "nginx-deployment" to complete Apr 15 13:07:00.036: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 15 13:07:00.043: INFO: Updating deployment nginx-deployment Apr 15 
13:07:00.043: INFO: Waiting for observed generation 2 Apr 15 13:07:02.056: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 15 13:07:02.059: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 15 13:07:02.061: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 15 13:07:02.069: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 15 13:07:02.069: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 15 13:07:02.071: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 15 13:07:02.075: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 15 13:07:02.075: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 15 13:07:02.081: INFO: Updating deployment nginx-deployment Apr 15 13:07:02.081: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 15 13:07:02.209: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 15 13:07:02.215: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 15 13:07:02.293: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/deployments/nginx-deployment,UID:7901904e-a240-4676-805e-8843d64e97ee,ResourceVersion:5559230,Generation:3,CreationTimestamp:2020-04-15 13:06:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-04-15 13:07:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-15 13:07:02 +0000 UTC 2020-04-15 13:07:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 15 13:07:02.392: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/replicasets/nginx-deployment-55fb7cb77f,UID:c41d0f50-d8ad-49cf-af2f-d6deaae0a97d,ResourceVersion:5559274,Generation:3,CreationTimestamp:2020-04-15 13:07:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7901904e-a240-4676-805e-8843d64e97ee 0xc002b8f9b7 0xc002b8f9b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 13:07:02.392: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 15 13:07:02.393: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/replicasets/nginx-deployment-7b8c6f4498,UID:6af86d7b-bb04-4a2b-88a8-dfda272aa887,ResourceVersion:5559273,Generation:3,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7901904e-a240-4676-805e-8843d64e97ee 0xc002b8fa97 0xc002b8fa98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 15 13:07:02.482: INFO: Pod "nginx-deployment-55fb7cb77f-4qx97" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4qx97,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-4qx97,UID:11b98220-81f3-4d5e-9a71-cefb7002a9a9,ResourceVersion:5559214,Generation:0,CreationTimestamp:2020-04-15 13:07:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3a9f7 0xc002a3a9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002a3aa70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3aa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-15 13:07:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.482: INFO: Pod "nginx-deployment-55fb7cb77f-8dksj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8dksj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-8dksj,UID:2a48c27d-735a-4861-be62-132e1be44155,ResourceVersion:5559275,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3ab70 0xc002a3ab71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3ac10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3ac30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.483: INFO: Pod "nginx-deployment-55fb7cb77f-ctzws" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ctzws,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-ctzws,UID:19701765-d6c5-4bf2-84e5-36895ea056d8,ResourceVersion:5559198,Generation:0,CreationTimestamp:2020-04-15 13:07:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3acb7 0xc002a3acb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002a3ad30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3ad50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-15 13:07:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.483: INFO: Pod "nginx-deployment-55fb7cb77f-fx9tp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fx9tp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-fx9tp,UID:cbf8566b-7d16-4c59-b171-6e16004b3d86,ResourceVersion:5559231,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3ae20 0xc002a3ae21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3aea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3aec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.483: INFO: Pod "nginx-deployment-55fb7cb77f-j79gm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j79gm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-j79gm,UID:74c9d1dc-28ba-4291-961f-aee4ca234799,ResourceVersion:5559190,Generation:0,CreationTimestamp:2020-04-15 13:07:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3af47 0xc002a3af48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002a3afc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3afe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-15 13:07:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.483: INFO: Pod "nginx-deployment-55fb7cb77f-jjhww" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jjhww,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-jjhww,UID:49c0fcad-33b8-4de0-bf96-7d7558cb73c7,ResourceVersion:5559186,Generation:0,CreationTimestamp:2020-04-15 13:07:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b0b0 0xc002a3b0b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b130} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-15 13:07:00 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.483: INFO: Pod "nginx-deployment-55fb7cb77f-kz2s6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kz2s6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-kz2s6,UID:e54728a5-ee13-4669-a017-83ba2dc1e823,ResourceVersion:5559263,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b220 0xc002a3b221}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.483: INFO: Pod "nginx-deployment-55fb7cb77f-l57mk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l57mk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-l57mk,UID:5b1d6cce-0ee1-4b8d-af50-9f3b1e149229,ResourceVersion:5559213,Generation:0,CreationTimestamp:2020-04-15 13:07:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b347 0xc002a3b348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:00 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-15 13:07:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-55fb7cb77f-mdk8q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mdk8q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-mdk8q,UID:386a6f77-f452-4760-bc33-434968f22651,ResourceVersion:5559240,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b4b0 0xc002a3b4b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b530} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-55fb7cb77f-mnh8h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mnh8h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-mnh8h,UID:b2bf1e97-3eca-47af-b8e4-2900dd783e01,ResourceVersion:5559262,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b5d7 0xc002a3b5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-55fb7cb77f-p9q4f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p9q4f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-p9q4f,UID:dcb9fd13-7467-4b41-ac89-cb6b77b5101c,ResourceVersion:5559259,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b707 0xc002a3b708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002a3b780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-55fb7cb77f-vr4jl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vr4jl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-vr4jl,UID:19031d39-385d-40fd-920d-fca7e58c6958,ResourceVersion:5559244,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b827 0xc002a3b828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-55fb7cb77f-ww9x7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ww9x7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-55fb7cb77f-ww9x7,UID:73031124-fe30-49ca-a5ad-68c006b90e48,ResourceVersion:5559260,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c41d0f50-d8ad-49cf-af2f-d6deaae0a97d 0xc002a3b957 0xc002a3b958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3b9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3b9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-7b8c6f4498-27vxx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-27vxx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-27vxx,UID:faef8e33-f94e-4b47-bd06-4c7c92cd3954,ResourceVersion:5559250,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002a3ba77 0xc002a3ba78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3baf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3bb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.484: INFO: Pod "nginx-deployment-7b8c6f4498-2fn6q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2fn6q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-2fn6q,UID:36ba53bf-1fea-44c8-b09c-089704bf2886,ResourceVersion:5559246,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002a3bb97 0xc002a3bb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3bc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3bc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.485: INFO: Pod "nginx-deployment-7b8c6f4498-5w9tp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5w9tp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-5w9tp,UID:0dc352f3-e54d-43b5-89e9-af280947aea7,ResourceVersion:5559266,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002a3bcb7 0xc002a3bcb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3bd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3bd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.485: INFO: Pod "nginx-deployment-7b8c6f4498-76jfw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76jfw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-76jfw,UID:8d4388b3-5226-4c89-999d-2f7314a135c7,ResourceVersion:5559271,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002a3bdd7 0xc002a3bdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3be50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a3be70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.485: INFO: Pod "nginx-deployment-7b8c6f4498-8z6kk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8z6kk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-8z6kk,UID:006356c1-eac5-4737-84d3-7c8ee3a5be58,ResourceVersion:5559118,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002a3bf07 0xc002a3bf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032a3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032a3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.234,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://06e5d192d742eb9bf9856799dbc7aa1ffd89a3099ee7c0601da4aab37c64584b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.485: INFO: Pod "nginx-deployment-7b8c6f4498-9bmws" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9bmws,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-9bmws,UID:5c14db4b-0016-40fc-b22a-e6d3de2b0bae,ResourceVersion:5559252,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc00032a567 0xc00032a568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032a600} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032a620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.485: INFO: Pod "nginx-deployment-7b8c6f4498-blkfw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-blkfw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-blkfw,UID:a6178387-8356-4671-b7bf-f0ed5c26b5c3,ResourceVersion:5559269,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc00032a727 0xc00032a728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032bab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032bae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.485: INFO: Pod "nginx-deployment-7b8c6f4498-bz2hz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bz2hz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-bz2hz,UID:74a1d015-9b28-4ed0-946c-4eb1db484b8e,ResourceVersion:5559270,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc00032bc17 0xc00032bc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032bcf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032bd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.486: INFO: Pod "nginx-deployment-7b8c6f4498-d8dnk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8dnk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-d8dnk,UID:0caf4d96-337d-475e-861b-f7fdb2fe8b25,ResourceVersion:5559134,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc00032bdb7 0xc00032bdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032be50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032be70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.236,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-15 13:06:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://830762076d3190bc042c87daa73c2591263c230a5ba19ad98cc97859be1cfdd2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.486: INFO: Pod "nginx-deployment-7b8c6f4498-fllpp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fllpp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-fllpp,UID:4ecc59ce-30e1-44c4-a9ba-d0f1d353f9d4,ResourceVersion:5559279,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc00032bf97 0xc00032bf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa2010} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-15 13:07:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.486: INFO: Pod "nginx-deployment-7b8c6f4498-gnjdj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gnjdj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-gnjdj,UID:a676dbbc-8016-4065-8684-10904924ce77,ResourceVersion:5559267,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa20f7 0xc002fa20f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa21a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa21c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.486: INFO: Pod "nginx-deployment-7b8c6f4498-jw97f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jw97f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-jw97f,UID:f34a092f-05fc-4aef-be54-dbbb6fec0382,ResourceVersion:5559241,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa2287 0xc002fa2288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa2300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.486: INFO: Pod "nginx-deployment-7b8c6f4498-ltn6v" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ltn6v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-ltn6v,UID:8e5b690d-7cf4-42f7-bf1b-6d21476daac0,ResourceVersion:5559117,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa23d7 0xc002fa23d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa2450} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.3,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5624e6dad7e3529b321e9ecd7a2f6d6287489550ad0ba4ec20d20d6110ceeee7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.487: INFO: Pod "nginx-deployment-7b8c6f4498-mpg7l" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mpg7l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-mpg7l,UID:728e7e54-1643-487a-b03a-eec281aabfeb,ResourceVersion:5559141,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa25d7 0xc002fa25d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa26f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.5,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e1eb98eb595fa145c6b73e9dad211e5989fef3cc0cdbb227e8f239051ef347ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.487: INFO: Pod "nginx-deployment-7b8c6f4498-ngjl6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ngjl6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-ngjl6,UID:11055e91-ecd6-4857-a2d7-71fe721514c7,ResourceVersion:5559158,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa2bf7 0xc002fa2bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa2c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.6,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fcde432e961d1fc8bd2b0f7c2b6f7c9e137c8633a13b5b4b2d069d36f7c90125}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.487: INFO: Pod "nginx-deployment-7b8c6f4498-tcjmw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tcjmw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-tcjmw,UID:0a4cc70a-e35f-4e86-b815-2343b3a73fcb,ResourceVersion:5559249,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa2d67 0xc002fa2d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa2de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.487: INFO: Pod "nginx-deployment-7b8c6f4498-tfvv5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tfvv5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-tfvv5,UID:c9bd56df-1e55-4b3a-af6a-687ed69260db,ResourceVersion:5559278,Generation:0,CreationTimestamp:2020-04-15 13:07:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa2e87 0xc002fa2e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa2f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa2f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:07:02 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-15 13:07:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.487: INFO: Pod "nginx-deployment-7b8c6f4498-v4mfp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v4mfp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-v4mfp,UID:0a8cb30f-114c-474c-a862-c588d6384cb1,ResourceVersion:5559101,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa2fe7 0xc002fa2fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa3060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa3080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.2,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5335b57a0bb632663680934498ffcc2acfaafda3b7616d9dcd358793ef6d3ced}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.488: INFO: Pod "nginx-deployment-7b8c6f4498-w9msq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w9msq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-w9msq,UID:121b1503-00af-43b2-b1b6-c2cbf853746d,ResourceVersion:5559124,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa3157 0xc002fa3158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa31d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa31f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.235,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://999dc114359e7b58004734124009310036f9e3851405bf232692c22cfb63175e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 13:07:02.488: INFO: Pod "nginx-deployment-7b8c6f4498-x9dsm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x9dsm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/nginx-deployment-7b8c6f4498-x9dsm,UID:221a7f9a-d4d2-4c88-8034-debceff813d1,ResourceVersion:5559152,Generation:0,CreationTimestamp:2020-04-15 13:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6af86d7b-bb04-4a2b-88a8-dfda272aa887 0xc002fa32c7 0xc002fa32c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dmhwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dmhwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dmhwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa3340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa3360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:06:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.238,StartTime:2020-04-15 13:06:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 13:06:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://59b8a5d37426acce65c9de28b44013e82a71759f95ad46ac035b523c121a886e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:07:02.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-4908" for this suite. Apr 15 13:07:18.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:07:18.854: INFO: namespace deployment-4908 deletion completed in 16.304037763s • [SLOW TEST:28.926 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:07:18.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:07:25.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1607" for this suite. 
Apr 15 13:08:15.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:08:15.445: INFO: namespace kubelet-test-1607 deletion completed in 50.199160902s • [SLOW TEST:56.590 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:08:15.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-9230bf08-1321-407a-993a-be45790c5ff6 STEP: Creating a pod to test consume secrets Apr 15 13:08:15.548: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f" in namespace "projected-4436" to be "success or failure" Apr 15 13:08:15.558: INFO: Pod "pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.80705ms Apr 15 13:08:17.563: INFO: Pod "pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014140986s Apr 15 13:08:19.567: INFO: Pod "pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018686315s STEP: Saw pod success Apr 15 13:08:19.567: INFO: Pod "pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f" satisfied condition "success or failure" Apr 15 13:08:19.570: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f container projected-secret-volume-test: STEP: delete the pod Apr 15 13:08:19.616: INFO: Waiting for pod pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f to disappear Apr 15 13:08:19.619: INFO: Pod pod-projected-secrets-4db1f7ff-9fe4-4c7f-b12b-a600a0ce435f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:08:19.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4436" for this suite. 
Apr 15 13:08:25.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:08:25.714: INFO: namespace projected-4436 deletion completed in 6.092531253s • [SLOW TEST:10.269 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:08:25.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8b3aff01-ef20-4bc3-a460-ce7865d902c8 STEP: Creating a pod to test consume secrets Apr 15 13:08:25.846: INFO: Waiting up to 5m0s for pod "pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01" in namespace "secrets-7448" to be "success or failure" Apr 15 13:08:25.850: INFO: Pod "pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935574ms Apr 15 13:08:27.854: INFO: Pod "pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007504911s Apr 15 13:08:29.857: INFO: Pod "pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011089318s STEP: Saw pod success Apr 15 13:08:29.858: INFO: Pod "pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01" satisfied condition "success or failure" Apr 15 13:08:29.860: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01 container secret-volume-test: STEP: delete the pod Apr 15 13:08:29.876: INFO: Waiting for pod pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01 to disappear Apr 15 13:08:29.881: INFO: Pod pod-secrets-5e68e42c-38fb-497f-9060-3b15c0ea9b01 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:08:29.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7448" for this suite. Apr 15 13:08:35.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:08:36.001: INFO: namespace secrets-7448 deletion completed in 6.117199219s • [SLOW TEST:10.286 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:08:36.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 15 13:08:36.091: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 15 13:08:41.096: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:08:42.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5849" for this suite. Apr 15 13:08:48.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:08:48.233: INFO: namespace replication-controller-5849 deletion completed in 6.115220397s • [SLOW TEST:12.231 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:08:48.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:08:48.279: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 15 13:08:48.301: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 15 13:08:53.305: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 15 13:08:53.305: INFO: Creating deployment "test-rolling-update-deployment" Apr 15 13:08:53.309: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 15 13:08:53.325: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 15 13:08:55.375: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 15 13:08:55.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722552933, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722552933, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722552933, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722552933, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:08:57.382: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 15 13:08:57.389: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6781,SelfLink:/apis/apps/v1/namespaces/deployment-6781/deployments/test-rolling-update-deployment,UID:10213efe-1d29-4fbd-b96c-400360eea612,ResourceVersion:5559906,Generation:1,CreationTimestamp:2020-04-15 13:08:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-15 13:08:53 +0000 UTC 2020-04-15 13:08:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-15 13:08:56 +0000 UTC 2020-04-15 13:08:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 15 13:08:57.392: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6781,SelfLink:/apis/apps/v1/namespaces/deployment-6781/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:4849ebdd-6e07-4525-ad10-eaf8a2106ff6,ResourceVersion:5559895,Generation:1,CreationTimestamp:2020-04-15 13:08:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 10213efe-1d29-4fbd-b96c-400360eea612 0xc001b3f9e7 0xc001b3f9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 15 13:08:57.392: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 15 13:08:57.392: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6781,SelfLink:/apis/apps/v1/namespaces/deployment-6781/replicasets/test-rolling-update-controller,UID:d89986f7-0024-49ee-849e-cfac314b8b36,ResourceVersion:5559905,Generation:2,CreationTimestamp:2020-04-15 13:08:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 10213efe-1d29-4fbd-b96c-400360eea612 0xc001b3f917 0xc001b3f918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 13:08:57.395: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-9mvvh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-9mvvh,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6781,SelfLink:/api/v1/namespaces/deployment-6781/pods/test-rolling-update-deployment-79f6b9d75c-9mvvh,UID:1c49e365-f692-4eb7-be6b-17320625e0b4,ResourceVersion:5559894,Generation:0,CreationTimestamp:2020-04-15 13:08:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 4849ebdd-6e07-4525-ad10-eaf8a2106ff6 0xc001d1c2e7 0xc001d1c2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dnv6r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnv6r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dnv6r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d1c360} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d1c380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:08:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:08:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:08:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:08:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.254,StartTime:2020-04-15 13:08:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-15 13:08:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://582f0eb40c5e77bac62ce758e9c2cfa7a7c13480f71e1d44ee667718fa491057}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:08:57.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-6781" for this suite. Apr 15 13:09:05.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:09:05.492: INFO: namespace deployment-6781 deletion completed in 8.09278876s • [SLOW TEST:17.259 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:09:05.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:10:05.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2147" for this suite. 
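[Editor's note] The probe test above exercises a pod whose readiness probe always fails: it must never report Ready and, because readiness probes (unlike liveness probes) never restart a container, its restart count must stay at 0. A minimal sketch of such a pod; all names here are illustrative, not the framework's actual manifest:

```yaml
# Hypothetical pod: the readiness probe runs a command that always
# exits non-zero, so the pod should never become Ready, yet the
# container keeps running (readiness failures do not trigger restarts).
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
```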
Apr 15 13:10:27.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:10:27.716: INFO: namespace container-probe-2147 deletion completed in 22.110623984s • [SLOW TEST:82.223 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:10:27.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Apr 15 13:10:27.772: INFO: Waiting up to 5m0s for pod "client-containers-4abae8c4-0230-41a3-9038-af569c20f074" in namespace "containers-2611" to be "success or failure" Apr 15 13:10:27.788: INFO: Pod "client-containers-4abae8c4-0230-41a3-9038-af569c20f074": Phase="Pending", Reason="", readiness=false. Elapsed: 16.234357ms Apr 15 13:10:29.834: INFO: Pod "client-containers-4abae8c4-0230-41a3-9038-af569c20f074": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062755748s Apr 15 13:10:31.839: INFO: Pod "client-containers-4abae8c4-0230-41a3-9038-af569c20f074": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066969559s STEP: Saw pod success Apr 15 13:10:31.839: INFO: Pod "client-containers-4abae8c4-0230-41a3-9038-af569c20f074" satisfied condition "success or failure" Apr 15 13:10:31.841: INFO: Trying to get logs from node iruya-worker2 pod client-containers-4abae8c4-0230-41a3-9038-af569c20f074 container test-container: STEP: delete the pod Apr 15 13:10:31.911: INFO: Waiting for pod client-containers-4abae8c4-0230-41a3-9038-af569c20f074 to disappear Apr 15 13:10:32.008: INFO: Pod client-containers-4abae8c4-0230-41a3-9038-af569c20f074 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:10:32.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2611" for this suite. Apr 15 13:10:38.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:10:38.123: INFO: namespace containers-2611 deletion completed in 6.111691018s • [SLOW TEST:10.407 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 
13:10:38.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:10:42.267: INFO: Waiting up to 5m0s for pod "client-envvars-601085dd-709e-4b56-a626-a030d132fa5b" in namespace "pods-3391" to be "success or failure" Apr 15 13:10:42.273: INFO: Pod "client-envvars-601085dd-709e-4b56-a626-a030d132fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.853655ms Apr 15 13:10:44.276: INFO: Pod "client-envvars-601085dd-709e-4b56-a626-a030d132fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008995257s Apr 15 13:10:46.279: INFO: Pod "client-envvars-601085dd-709e-4b56-a626-a030d132fa5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011909804s STEP: Saw pod success Apr 15 13:10:46.279: INFO: Pod "client-envvars-601085dd-709e-4b56-a626-a030d132fa5b" satisfied condition "success or failure" Apr 15 13:10:46.281: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-601085dd-709e-4b56-a626-a030d132fa5b container env3cont: STEP: delete the pod Apr 15 13:10:46.308: INFO: Waiting for pod client-envvars-601085dd-709e-4b56-a626-a030d132fa5b to disappear Apr 15 13:10:46.350: INFO: Pod client-envvars-601085dd-709e-4b56-a626-a030d132fa5b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:10:46.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3391" for this suite. 
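[Editor's note] The behavior checked by this Pods spec is kubelet-injected service environment variables: a pod created after a Service exists in its namespace is started with `<SERVICE>_SERVICE_HOST` / `<SERVICE>_SERVICE_PORT` variables (service name uppercased, dashes replaced by underscores). An illustrative Service of the kind involved — names and ports are assumptions, not the test's own:

```yaml
# Hypothetical service; a pod created afterwards in the same namespace
# starts with FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT
# (plus Docker-link-style FOOSERVICE_PORT_* variables) in its environment.
apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    name: sample-pod
  ports:
  - port: 8765
    targetPort: 8080
```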
Apr 15 13:11:36.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:11:36.438: INFO: namespace pods-3391 deletion completed in 50.083685545s • [SLOW TEST:58.314 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:11:36.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 15 13:11:36.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3850' Apr 15 13:11:38.899: INFO: stderr: "" Apr 15 13:11:38.899: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 15 13:11:39.904: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:11:39.904: INFO: Found 0 / 1 Apr 15 13:11:40.904: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:11:40.904: INFO: Found 0 / 1 Apr 15 13:11:41.903: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:11:41.904: INFO: Found 0 / 1 Apr 15 13:11:42.904: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:11:42.904: INFO: Found 1 / 1 Apr 15 13:11:42.904: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 15 13:11:42.907: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:11:42.907: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 15 13:11:42.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-747rz --namespace=kubectl-3850 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 15 13:11:43.011: INFO: stderr: "" Apr 15 13:11:43.011: INFO: stdout: "pod/redis-master-747rz patched\n" STEP: checking annotations Apr 15 13:11:43.017: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:11:43.017: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:11:43.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3850" for this suite. 
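[Editor's note] The patch applied above with `-p` is a strategic merge patch; the inline JSON `{"metadata":{"annotations":{"x":"y"}}}` from the logged command is equivalent to this YAML fragment, which kubectl merges into the live pod's metadata:

```yaml
# Merge patch adding a single annotation; passed to
# `kubectl patch pod <name> -p <patch>` in the step above.
metadata:
  annotations:
    x: "y"
```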
Apr 15 13:12:05.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:12:05.110: INFO: namespace kubectl-3850 deletion completed in 22.090114294s • [SLOW TEST:28.671 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:12:05.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 15 13:12:13.286: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 15 13:12:13.302: INFO: Pod pod-with-poststart-http-hook still exists Apr 15 13:12:15.302: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 15 13:12:15.307: INFO: Pod pod-with-poststart-http-hook still exists Apr 15 13:12:17.302: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 15 13:12:17.306: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:12:17.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1615" for this suite. 
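[Editor's note] The pod under test attaches a postStart HTTP hook that fires against the handler pod created in the "create the container to handle the HTTPGet hook request" step. A minimal sketch of such a lifecycle hook; the host, port, and path are placeholders, not the framework's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook  # name taken from the log above
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.1   # placeholder: the handler pod's IP
          path: /echo        # placeholder path
          port: 8080
```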
Apr 15 13:12:39.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:12:39.408: INFO: namespace container-lifecycle-hook-1615 deletion completed in 22.09862154s • [SLOW TEST:34.297 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:12:39.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 15 13:12:44.031: INFO: Successfully updated pod "annotationupdate08119e5b-cdd7-4699-ab3f-55f262399266" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:12:46.047: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "projected-560" for this suite. Apr 15 13:13:08.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:13:08.154: INFO: namespace projected-560 deletion completed in 22.102031899s • [SLOW TEST:28.746 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:13:08.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 15 13:13:08.230: INFO: Waiting up to 5m0s for pod "downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778" in namespace "downward-api-5756" to be "success or failure" Apr 15 13:13:08.253: INFO: Pod "downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778": Phase="Pending", Reason="", readiness=false. Elapsed: 23.514408ms Apr 15 13:13:10.258: INFO: Pod "downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027890996s Apr 15 13:13:12.262: INFO: Pod "downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032416935s STEP: Saw pod success Apr 15 13:13:12.262: INFO: Pod "downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778" satisfied condition "success or failure" Apr 15 13:13:12.265: INFO: Trying to get logs from node iruya-worker2 pod downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778 container dapi-container: STEP: delete the pod Apr 15 13:13:12.333: INFO: Waiting for pod downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778 to disappear Apr 15 13:13:12.362: INFO: Pod downward-api-96f12a7f-9f8e-4034-8e09-4a131a092778 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:13:12.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5756" for this suite. Apr 15 13:13:18.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:13:18.521: INFO: namespace downward-api-5756 deletion completed in 6.155752935s • [SLOW TEST:10.368 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:13:18.522: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 15 13:13:18.567: INFO: Waiting up to 5m0s for pod "pod-3dbdb48b-abbd-4333-ab95-fd20860c654e" in namespace "emptydir-931" to be "success or failure" Apr 15 13:13:18.582: INFO: Pod "pod-3dbdb48b-abbd-4333-ab95-fd20860c654e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.493312ms Apr 15 13:13:20.589: INFO: Pod "pod-3dbdb48b-abbd-4333-ab95-fd20860c654e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021642045s Apr 15 13:13:22.593: INFO: Pod "pod-3dbdb48b-abbd-4333-ab95-fd20860c654e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025949875s STEP: Saw pod success Apr 15 13:13:22.593: INFO: Pod "pod-3dbdb48b-abbd-4333-ab95-fd20860c654e" satisfied condition "success or failure" Apr 15 13:13:22.596: INFO: Trying to get logs from node iruya-worker pod pod-3dbdb48b-abbd-4333-ab95-fd20860c654e container test-container: STEP: delete the pod Apr 15 13:13:22.632: INFO: Waiting for pod pod-3dbdb48b-abbd-4333-ab95-fd20860c654e to disappear Apr 15 13:13:22.635: INFO: Pod pod-3dbdb48b-abbd-4333-ab95-fd20860c654e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:13:22.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-931" for this suite. 
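[Editor's note] The case name "(root,0777,default)" encodes (user, expected file mode, volume medium): run as root, expect 0777 permissions, default (node-disk-backed) emptyDir. A sketch of the shape of pod this exercises — the real test uses the e2e "mounttest" image; the command below is an illustrative stand-in:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-default   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Stand-in check: print the mode of the mounted volume directory.
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium field -> default (node disk)
```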
Apr 15 13:13:28.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:13:28.736: INFO: namespace emptydir-931 deletion completed in 6.098118178s • [SLOW TEST:10.214 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:13:28.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:13:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3600" for this suite. 
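[Editor's note] This Kubelet spec runs a command that fails immediately and asserts the container status ends up in a terminated state with a reason set. An illustrative pod of that shape (names assumed, not the framework's manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-always-fails   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]    # exits 1 immediately
# status.containerStatuses[0].state.terminated should then carry a
# non-empty reason (e.g. Error) and a non-zero exitCode.
```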
Apr 15 13:13:38.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:13:39.031: INFO: namespace kubelet-test-3600 deletion completed in 6.161598493s
• [SLOW TEST:10.295 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:13:39.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 15 13:13:43.134: INFO: Expected: &{OK}
to match Container's Termination Message: OK
--
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:13:43.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9755" for this suite.
Apr 15 13:13:49.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:13:49.256: INFO: namespace container-runtime-9755 deletion completed in 6.085912123s
• [SLOW TEST:10.224 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:13:49.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:13:49.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7395" for this suite.
Apr 15 13:14:11.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:14:11.501: INFO: namespace pods-7395 deletion completed in 22.112327443s
• [SLOW TEST:22.244 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:14:11.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 13:14:11.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:14:15.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6838" for this suite.
Apr 15 13:15:09.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:15:09.753: INFO: namespace pods-6838 deletion completed in 54.11366418s
• [SLOW TEST:58.251 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:15:09.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-fb5e5995-d134-4645-ba5c-12b069f5c5d1
STEP: Creating a pod to test consume secrets
Apr 15 13:15:09.846: INFO: Waiting up to 5m0s for pod "pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42" in namespace "secrets-5407" to be "success or failure"
Apr 15 13:15:09.908: INFO: Pod "pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42": Phase="Pending", Reason="", readiness=false. Elapsed: 62.025163ms
Apr 15 13:15:11.939: INFO: Pod "pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093109517s
Apr 15 13:15:13.943: INFO: Pod "pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097931335s
STEP: Saw pod success
Apr 15 13:15:13.944: INFO: Pod "pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42" satisfied condition "success or failure"
Apr 15 13:15:13.947: INFO: Trying to get logs from node iruya-worker pod pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42 container secret-volume-test:
STEP: delete the pod
Apr 15 13:15:13.968: INFO: Waiting for pod pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42 to disappear
Apr 15 13:15:13.991: INFO: Pod pod-secrets-73e57f68-97dc-4bbc-80da-252220d4df42 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:15:13.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5407" for this suite.
Apr 15 13:15:20.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:15:20.111: INFO: namespace secrets-5407 deletion completed in 6.115957499s
• [SLOW TEST:10.357 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:15:20.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 15 13:15:20.218: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:20.264: INFO: Number of nodes with available pods: 0
Apr 15 13:15:20.264: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:15:21.269: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:21.271: INFO: Number of nodes with available pods: 0
Apr 15 13:15:21.271: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:15:22.269: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:22.273: INFO: Number of nodes with available pods: 0
Apr 15 13:15:22.273: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:15:23.268: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:23.271: INFO: Number of nodes with available pods: 1
Apr 15 13:15:23.271: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 13:15:24.268: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:24.271: INFO: Number of nodes with available pods: 2
Apr 15 13:15:24.271: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 15 13:15:24.289: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:24.291: INFO: Number of nodes with available pods: 1
Apr 15 13:15:24.291: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:25.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:25.300: INFO: Number of nodes with available pods: 1
Apr 15 13:15:25.300: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:26.298: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:26.302: INFO: Number of nodes with available pods: 1
Apr 15 13:15:26.302: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:27.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:27.301: INFO: Number of nodes with available pods: 1
Apr 15 13:15:27.301: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:28.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:28.301: INFO: Number of nodes with available pods: 1
Apr 15 13:15:28.301: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:29.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:29.301: INFO: Number of nodes with available pods: 1
Apr 15 13:15:29.301: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:30.296: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:30.300: INFO: Number of nodes with available pods: 1
Apr 15 13:15:30.301: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:31.298: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:31.301: INFO: Number of nodes with available pods: 1
Apr 15 13:15:31.302: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:32.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:32.301: INFO: Number of nodes with available pods: 1
Apr 15 13:15:32.301: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:33.300: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:33.303: INFO: Number of nodes with available pods: 1
Apr 15 13:15:33.303: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:34.295: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:34.298: INFO: Number of nodes with available pods: 1
Apr 15 13:15:34.298: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 15 13:15:35.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 13:15:35.300: INFO: Number of nodes with available pods: 2
Apr 15 13:15:35.300: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7329, will wait for the garbage collector to delete the pods
Apr 15 13:15:35.374: INFO: Deleting DaemonSet.extensions daemon-set took: 16.984099ms
Apr 15 13:15:35.674: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.24321ms
Apr 15 13:15:42.278: INFO: Number of nodes with available pods: 0
Apr 15 13:15:42.278: INFO: Number of running nodes: 0, number of available pods: 0
Apr 15 13:15:42.280: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7329/daemonsets","resourceVersion":"5561142"},"items":null}
Apr 15 13:15:42.283: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7329/pods","resourceVersion":"5561142"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:15:42.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7329" for this suite.
Apr 15 13:15:48.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:15:48.384: INFO: namespace daemonsets-7329 deletion completed in 6.087042575s
• [SLOW TEST:28.273 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:15:48.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 13:15:48.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2" in namespace "projected-6192" to be "success or failure"
Apr 15 13:15:48.454: INFO: Pod "downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576852ms
Apr 15 13:15:50.459: INFO: Pod "downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009059699s
Apr 15 13:15:52.463: INFO: Pod "downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013750815s
STEP: Saw pod success
Apr 15 13:15:52.463: INFO: Pod "downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2" satisfied condition "success or failure"
Apr 15 13:15:52.467: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2 container client-container:
STEP: delete the pod
Apr 15 13:15:52.499: INFO: Waiting for pod downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2 to disappear
Apr 15 13:15:52.511: INFO: Pod downwardapi-volume-3826feed-f6a7-4215-a304-a3583a87daf2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:15:52.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6192" for this suite.
Apr 15 13:15:58.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:15:58.629: INFO: namespace projected-6192 deletion completed in 6.11500591s
• [SLOW TEST:10.245 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:15:58.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Apr 15 13:15:58.727: INFO: Waiting up to 5m0s for pod "client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11" in namespace "containers-7125" to be "success or failure"
Apr 15 13:15:58.738: INFO: Pod "client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11": Phase="Pending", Reason="", readiness=false. Elapsed: 10.963428ms
Apr 15 13:16:00.742: INFO: Pod "client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014607883s
Apr 15 13:16:02.747: INFO: Pod "client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019236029s
STEP: Saw pod success
Apr 15 13:16:02.747: INFO: Pod "client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11" satisfied condition "success or failure"
Apr 15 13:16:02.750: INFO: Trying to get logs from node iruya-worker pod client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11 container test-container:
STEP: delete the pod
Apr 15 13:16:02.764: INFO: Waiting for pod client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11 to disappear
Apr 15 13:16:02.820: INFO: Pod client-containers-3e78ee1a-22c2-400a-b473-bb4223167b11 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:16:02.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7125" for this suite.
Apr 15 13:16:08.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:16:08.911: INFO: namespace containers-7125 deletion completed in 6.086895321s
• [SLOW TEST:10.282 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:16:08.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 15 13:16:08.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4996 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 15 13:16:12.421: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0415 13:16:12.347333 286 log.go:172] (0xc00012cd10) (0xc000866140) Create stream\nI0415 13:16:12.347374 286 log.go:172] (0xc00012cd10) (0xc000866140) Stream added, broadcasting: 1\nI0415 13:16:12.350417 286 log.go:172] (0xc00012cd10) Reply frame received for 1\nI0415 13:16:12.350463 286 log.go:172] (0xc00012cd10) (0xc0006e00a0) Create stream\nI0415 13:16:12.350482 286 log.go:172] (0xc00012cd10) (0xc0006e00a0) Stream added, broadcasting: 3\nI0415 13:16:12.351578 286 log.go:172] (0xc00012cd10) Reply frame received for 3\nI0415 13:16:12.351645 286 log.go:172] (0xc00012cd10) (0xc0006e01e0) Create stream\nI0415 13:16:12.351661 286 log.go:172] (0xc00012cd10) (0xc0006e01e0) Stream added, broadcasting: 5\nI0415 13:16:12.352819 286 log.go:172] (0xc00012cd10) Reply frame received for 5\nI0415 13:16:12.352881 286 log.go:172] (0xc00012cd10) (0xc000866280) Create stream\nI0415 13:16:12.352900 286 log.go:172] (0xc00012cd10) (0xc000866280) Stream added, broadcasting: 7\nI0415 13:16:12.354166 286 log.go:172] (0xc00012cd10) Reply frame received for 7\nI0415 13:16:12.354418 286 log.go:172] (0xc0006e00a0) (3) Writing data frame\nI0415 13:16:12.354592 286 log.go:172] (0xc0006e00a0) (3) Writing data frame\nI0415 13:16:12.355625 286 log.go:172] (0xc00012cd10) Data frame received for 5\nI0415 13:16:12.355651 286 log.go:172] (0xc0006e01e0) (5) Data frame handling\nI0415 13:16:12.355669 286 log.go:172] (0xc0006e01e0) (5) Data frame sent\nI0415 13:16:12.356338 286 log.go:172] (0xc00012cd10) Data frame received for 5\nI0415 13:16:12.356360 286 log.go:172] (0xc0006e01e0) (5) Data frame handling\nI0415 13:16:12.356377 286 log.go:172] (0xc0006e01e0) (5) Data frame sent\nI0415 13:16:12.400674 286 log.go:172] (0xc00012cd10) Data frame received for 7\nI0415 13:16:12.400708 286 log.go:172] (0xc00012cd10) Data frame received for 5\nI0415 13:16:12.400751 286 log.go:172] 
(0xc0006e01e0) (5) Data frame handling\nI0415 13:16:12.400790 286 log.go:172] (0xc000866280) (7) Data frame handling\nI0415 13:16:12.401452 286 log.go:172] (0xc00012cd10) Data frame received for 1\nI0415 13:16:12.401504 286 log.go:172] (0xc00012cd10) (0xc0006e00a0) Stream removed, broadcasting: 3\nI0415 13:16:12.401626 286 log.go:172] (0xc000866140) (1) Data frame handling\nI0415 13:16:12.401644 286 log.go:172] (0xc000866140) (1) Data frame sent\nI0415 13:16:12.401652 286 log.go:172] (0xc00012cd10) (0xc000866140) Stream removed, broadcasting: 1\nI0415 13:16:12.401660 286 log.go:172] (0xc00012cd10) Go away received\nI0415 13:16:12.401757 286 log.go:172] (0xc00012cd10) (0xc000866140) Stream removed, broadcasting: 1\nI0415 13:16:12.401786 286 log.go:172] (0xc00012cd10) (0xc0006e00a0) Stream removed, broadcasting: 3\nI0415 13:16:12.401804 286 log.go:172] (0xc00012cd10) (0xc0006e01e0) Stream removed, broadcasting: 5\nI0415 13:16:12.401818 286 log.go:172] (0xc00012cd10) (0xc000866280) Stream removed, broadcasting: 7\n" Apr 15 13:16:12.421: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:16:14.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4996" for this suite. 
Apr 15 13:16:20.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:16:20.571: INFO: namespace kubectl-4996 deletion completed in 6.090463352s
• [SLOW TEST:11.658 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:16:20.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 15 13:16:20.643: INFO: Waiting up to 5m0s for pod "pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1" in namespace "emptydir-1467" to be "success or failure"
Apr 15 13:16:20.651: INFO: Pod "pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.367206ms
Apr 15 13:16:22.656: INFO: Pod "pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012946719s
Apr 15 13:16:24.660: INFO: Pod "pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017305727s
STEP: Saw pod success
Apr 15 13:16:24.660: INFO: Pod "pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1" satisfied condition "success or failure"
Apr 15 13:16:24.664: INFO: Trying to get logs from node iruya-worker pod pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1 container test-container:
STEP: delete the pod
Apr 15 13:16:24.696: INFO: Waiting for pod pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1 to disappear
Apr 15 13:16:24.705: INFO: Pod pod-043b5bc9-8cfd-497e-839e-d85c4eb8fdf1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:16:24.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1467" for this suite.
Apr 15 13:16:30.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:16:30.818: INFO: namespace emptydir-1467 deletion completed in 6.109494048s
• [SLOW TEST:10.247 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:16:30.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 15 13:16:31.391: INFO: created pod pod-service-account-defaultsa
Apr 15 13:16:31.391: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 15 13:16:31.398: INFO: created pod pod-service-account-mountsa
Apr 15 13:16:31.398: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 15 13:16:31.431: INFO: created pod pod-service-account-nomountsa
Apr 15 13:16:31.431: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 15 13:16:31.452: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 15 13:16:31.453: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 15 13:16:31.479: INFO: created pod pod-service-account-mountsa-mountspec
Apr 15 13:16:31.479: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 15 13:16:31.552: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 15 13:16:31.552: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 15 13:16:31.572: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 15 13:16:31.572: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 15 13:16:31.602: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 15 13:16:31.602: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 15 13:16:31.636: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 15 13:16:31.636: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:16:31.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9559" for this suite.
Apr 15 13:16:59.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:16:59.887: INFO: namespace svcaccounts-9559 deletion completed in 28.16005972s
• [SLOW TEST:29.068 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:16:59.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2900
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[]
Apr 15 13:17:00.031: INFO: Get endpoints failed (36.055554ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 15 13:17:01.035: INFO:
successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[] (1.040060496s elapsed) STEP: Creating pod pod1 in namespace services-2900 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[pod1:[80]] Apr 15 13:17:04.143: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[pod1:[80]] (3.101086224s elapsed) STEP: Creating pod pod2 in namespace services-2900 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[pod1:[80] pod2:[80]] Apr 15 13:17:08.254: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[pod1:[80] pod2:[80]] (4.105292499s elapsed) STEP: Deleting pod pod1 in namespace services-2900 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[pod2:[80]] Apr 15 13:17:09.324: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[pod2:[80]] (1.064842094s elapsed) STEP: Deleting pod pod2 in namespace services-2900 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[] Apr 15 13:17:10.351: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[] (1.021956973s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:17:10.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2900" for this suite. 
Apr 15 13:17:32.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:17:32.520: INFO: namespace services-2900 deletion completed in 22.105503477s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.633 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:17:32.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:17:32.573: INFO: Creating deployment "test-recreate-deployment" Apr 15 13:17:32.585: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 15 13:17:32.614: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 15 13:17:34.620: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 15 13:17:34.623: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722553452, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722553452, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722553452, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722553452, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 15 13:17:36.626: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 15 13:17:36.632: INFO: Updating deployment test-recreate-deployment
Apr 15 13:17:36.632: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 15 13:17:36.869: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7198,SelfLink:/apis/apps/v1/namespaces/deployment-7198/deployments/test-recreate-deployment,UID:b9c51de0-dee7-4b3f-b76d-9a2cb9709003,ResourceVersion:5561680,Generation:2,CreationTimestamp:2020-04-15 13:17:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision:
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-15 13:17:36 +0000 UTC 2020-04-15 13:17:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-15 13:17:36 +0000 UTC 2020-04-15 13:17:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 15 13:17:36.873: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7198,SelfLink:/apis/apps/v1/namespaces/deployment-7198/replicasets/test-recreate-deployment-5c8c9cc69d,UID:70f70dde-b583-4a98-a396-beb9b5c38280,ResourceVersion:5561677,Generation:1,CreationTimestamp:2020-04-15 13:17:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b9c51de0-dee7-4b3f-b76d-9a2cb9709003 0xc002b61977 0xc002b61978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 13:17:36.873: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 15 13:17:36.873: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7198,SelfLink:/apis/apps/v1/namespaces/deployment-7198/replicasets/test-recreate-deployment-6df85df6b9,UID:593a5e5a-9fc1-4785-bb52-27e8711813dd,ResourceVersion:5561669,Generation:2,CreationTimestamp:2020-04-15 13:17:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b9c51de0-dee7-4b3f-b76d-9a2cb9709003 0xc002b61a47 0xc002b61a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 13:17:37.062: INFO: Pod "test-recreate-deployment-5c8c9cc69d-c6mpq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-c6mpq,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7198,SelfLink:/api/v1/namespaces/deployment-7198/pods/test-recreate-deployment-5c8c9cc69d-c6mpq,UID:9ea4db5f-f227-4aae-b224-299f48679fe9,ResourceVersion:5561681,Generation:0,CreationTimestamp:2020-04-15 13:17:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 70f70dde-b583-4a98-a396-beb9b5c38280 0xc000d00357 0xc000d00358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rfrfp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rfrfp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rfrfp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d003e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d00400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:17:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:17:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:17:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:17:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-15 13:17:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:17:37.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7198" for this suite.
Apr 15 13:17:43.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:17:43.164: INFO: namespace deployment-7198 deletion completed in 6.097627762s • [SLOW TEST:10.643 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:17:43.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 15 13:17:43.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1887' Apr 15 13:17:43.505: INFO: stderr: "" Apr 15 13:17:43.505: INFO: stdout: "pod/pause created\n" Apr 15 13:17:43.505: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 15 13:17:43.505: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1887" to be "running and ready" Apr 15 13:17:43.546: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.176146ms Apr 15 13:17:45.550: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045393497s Apr 15 13:17:47.555: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.049583755s Apr 15 13:17:47.555: INFO: Pod "pause" satisfied condition "running and ready" Apr 15 13:17:47.555: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 15 13:17:47.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1887' Apr 15 13:17:47.677: INFO: stderr: "" Apr 15 13:17:47.677: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 15 13:17:47.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1887' Apr 15 13:17:47.784: INFO: stderr: "" Apr 15 13:17:47.784: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 15 13:17:47.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1887' Apr 15 13:17:47.895: INFO: stderr: "" Apr 15 13:17:47.895: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 15 13:17:47.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1887' Apr 15 13:17:47.993: INFO: stderr: "" Apr 15 13:17:47.993: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 15 13:17:47.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1887' Apr 15 13:17:48.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 13:17:48.110: INFO: stdout: "pod \"pause\" force deleted\n" Apr 15 13:17:48.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1887' Apr 15 13:17:48.211: INFO: stderr: "No resources found.\n" Apr 15 13:17:48.211: INFO: stdout: "" Apr 15 13:17:48.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 15 13:17:48.307: INFO: stderr: "" Apr 15 13:17:48.307: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:17:48.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1887" for this suite. 
Apr 15 13:17:54.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:17:54.430: INFO: namespace kubectl-1887 deletion completed in 6.118927019s • [SLOW TEST:11.266 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:17:54.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 13:17:54.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8" in namespace "projected-3820" to be "success or failure" Apr 15 13:17:54.531: INFO: Pod "downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.589714ms Apr 15 13:17:56.540: INFO: Pod "downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024767202s Apr 15 13:17:58.545: INFO: Pod "downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029439344s STEP: Saw pod success Apr 15 13:17:58.545: INFO: Pod "downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8" satisfied condition "success or failure" Apr 15 13:17:58.548: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8 container client-container: STEP: delete the pod Apr 15 13:17:58.582: INFO: Waiting for pod downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8 to disappear Apr 15 13:17:58.606: INFO: Pod downwardapi-volume-af913ca8-97b1-4d27-ab0e-c47d7c9c84e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:17:58.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3820" for this suite. 
Apr 15 13:18:04.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:18:04.701: INFO: namespace projected-3820 deletion completed in 6.091909313s • [SLOW TEST:10.272 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:18:04.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-6dfff361-28b7-462c-a383-2b70ff3c20b6 STEP: Creating a pod to test consume secrets Apr 15 13:18:04.770: INFO: Waiting up to 5m0s for pod "pod-secrets-544cce40-509f-41f4-b4db-076807840d4f" in namespace "secrets-4547" to be "success or failure" Apr 15 13:18:04.777: INFO: Pod "pod-secrets-544cce40-509f-41f4-b4db-076807840d4f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.058093ms Apr 15 13:18:06.781: INFO: Pod "pod-secrets-544cce40-509f-41f4-b4db-076807840d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010568268s Apr 15 13:18:08.785: INFO: Pod "pod-secrets-544cce40-509f-41f4-b4db-076807840d4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014915877s STEP: Saw pod success Apr 15 13:18:08.785: INFO: Pod "pod-secrets-544cce40-509f-41f4-b4db-076807840d4f" satisfied condition "success or failure" Apr 15 13:18:08.788: INFO: Trying to get logs from node iruya-worker pod pod-secrets-544cce40-509f-41f4-b4db-076807840d4f container secret-volume-test: STEP: delete the pod Apr 15 13:18:08.806: INFO: Waiting for pod pod-secrets-544cce40-509f-41f4-b4db-076807840d4f to disappear Apr 15 13:18:08.810: INFO: Pod pod-secrets-544cce40-509f-41f4-b4db-076807840d4f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:18:08.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4547" for this suite. 
Apr 15 13:18:14.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:18:14.884: INFO: namespace secrets-4547 deletion completed in 6.07191837s
• [SLOW TEST:10.183 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:18:14.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-jzz8
STEP: Creating a pod to test atomic-volume-subpath
Apr 15 13:18:14.968: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jzz8" in namespace "subpath-4548" to be "success or failure"
Apr 15 13:18:14.971: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.837865ms
Apr 15 13:18:16.976: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008174156s
Apr 15 13:18:18.980: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 4.012475842s
Apr 15 13:18:20.984: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 6.016754058s
Apr 15 13:18:22.988: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 8.020331727s
Apr 15 13:18:24.992: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 10.024020434s
Apr 15 13:18:26.996: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 12.028484683s
Apr 15 13:18:29.000: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 14.032831296s
Apr 15 13:18:31.005: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 16.037389072s
Apr 15 13:18:33.009: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 18.041839505s
Apr 15 13:18:35.014: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 20.046042911s
Apr 15 13:18:37.018: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Running", Reason="", readiness=true. Elapsed: 22.050293216s
Apr 15 13:18:39.022: INFO: Pod "pod-subpath-test-projected-jzz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053947512s
STEP: Saw pod success
Apr 15 13:18:39.022: INFO: Pod "pod-subpath-test-projected-jzz8" satisfied condition "success or failure"
Apr 15 13:18:39.024: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-jzz8 container test-container-subpath-projected-jzz8:
STEP: delete the pod
Apr 15 13:18:39.042: INFO: Waiting for pod pod-subpath-test-projected-jzz8 to disappear
Apr 15 13:18:39.047: INFO: Pod pod-subpath-test-projected-jzz8 no longer exists
STEP: Deleting pod pod-subpath-test-projected-jzz8
Apr 15 13:18:39.047: INFO: Deleting pod "pod-subpath-test-projected-jzz8" in namespace "subpath-4548"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:18:39.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4548" for this suite.
Apr 15 13:18:45.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:18:45.140: INFO: namespace subpath-4548 deletion completed in 6.089276361s
• [SLOW TEST:30.255 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:18:45.140: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 15 13:18:45.199: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:19:02.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7031" for this suite.
Apr 15 13:19:08.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:19:08.291: INFO: namespace pods-7031 deletion completed in 6.105980428s
• [SLOW TEST:23.151 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:19:08.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:19:13.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1330" for this suite.
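The ReplicationController test above verifies that an RC whose selector matches a pre-existing, unowned pod adopts it rather than creating a replacement. A minimal sketch of that adoption rule, as a toy in-memory model (not the real controller code; the RC name below is hypothetical):

```python
# Toy model of ReplicationController adoption: an orphan pod (no controller
# ownerReference) whose labels match the RC's selector gets adopted in place.
def adopt_matching(rc, pods):
    for pod in pods:
        selector_matches = all(pod["labels"].get(k) == v
                               for k, v in rc["selector"].items())
        if selector_matches and pod.get("controller") is None:
            pod["controller"] = rc["name"]  # adopt the orphan

# Mirrors the logged steps: pod with a 'name' label exists first, then the RC.
orphan = {"labels": {"name": "pod-adoption"}, "controller": None}
rc = {"name": "pod-adoption-rc", "selector": {"name": "pod-adoption"}}  # hypothetical RC name
adopt_matching(rc, [orphan])
print(orphan["controller"])  # the orphan is now owned by the RC
```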
Apr 15 13:19:35.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:19:35.519: INFO: namespace replication-controller-1330 deletion completed in 22.09292438s
• [SLOW TEST:27.228 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:19:35.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 15 13:19:35.577: INFO: Waiting up to 5m0s for pod "downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac" in namespace "downward-api-8538" to be "success or failure"
Apr 15 13:19:35.581: INFO: Pod "downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185391ms
Apr 15 13:19:37.584: INFO: Pod "downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007897485s
Apr 15 13:19:39.589: INFO: Pod "downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012359327s
STEP: Saw pod success
Apr 15 13:19:39.589: INFO: Pod "downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac" satisfied condition "success or failure"
Apr 15 13:19:39.592: INFO: Trying to get logs from node iruya-worker pod downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac container dapi-container:
STEP: delete the pod
Apr 15 13:19:39.612: INFO: Waiting for pod downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac to disappear
Apr 15 13:19:39.616: INFO: Pod downward-api-151abfe2-8c74-4121-bb8a-b92443cad5ac no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:19:39.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8538" for this suite.
Apr 15 13:19:45.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:19:45.704: INFO: namespace downward-api-8538 deletion completed in 6.085327625s
• [SLOW TEST:10.185 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:19:45.705: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 15 13:19:45.752: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:19:45.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3156" for this suite.
Apr 15 13:19:51.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:19:51.934: INFO: namespace kubectl-3156 deletion completed in 6.087725957s
• [SLOW TEST:6.229 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:19:51.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-67c2a7a6-76c8-49ed-943a-02984ddf024b
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:19:52.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1243" for this suite.
Apr 15 13:19:58.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:19:58.123: INFO: namespace secrets-1243 deletion completed in 6.112908792s
• [SLOW TEST:6.189 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:19:58.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 15 13:19:58.195: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562187,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 15 13:19:58.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562187,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 15 13:20:08.204: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562208,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 15 13:20:08.204: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562208,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 15 13:20:18.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562228,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 15 13:20:18.212: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562228,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 15 13:20:28.219: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562248,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 15 13:20:28.219: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-a,UID:8a972ff0-0471-499d-939f-5ad2cbbe8574,ResourceVersion:5562248,Generation:0,CreationTimestamp:2020-04-15 13:19:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 15 13:20:38.227: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-b,UID:27e2cb96-2901-4a78-a8bf-a8c41ab909c4,ResourceVersion:5562269,Generation:0,CreationTimestamp:2020-04-15 13:20:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 15 13:20:38.227: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-b,UID:27e2cb96-2901-4a78-a8bf-a8c41ab909c4,ResourceVersion:5562269,Generation:0,CreationTimestamp:2020-04-15 13:20:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 15 13:20:48.240: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-b,UID:27e2cb96-2901-4a78-a8bf-a8c41ab909c4,ResourceVersion:5562290,Generation:0,CreationTimestamp:2020-04-15 13:20:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 15 13:20:48.240: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3768,SelfLink:/api/v1/namespaces/watch-3768/configmaps/e2e-watch-test-configmap-b,UID:27e2cb96-2901-4a78-a8bf-a8c41ab909c4,ResourceVersion:5562290,Generation:0,CreationTimestamp:2020-04-15 13:20:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:20:58.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3768" for this suite.
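The Watchers test above checks label-selector fan-out: watchers selecting on label A, label B, and A-or-B each receive exactly the events for objects matching their selector, which is why every event for configmap A appears twice in the log (once for the A watcher, once for the A-or-B watcher). A minimal in-memory sketch of that dispatch rule (an assumed toy model, not the Kubernetes watch API):

```python
# Toy broadcaster: each watcher registers a label predicate and receives only
# the events whose object labels satisfy it, in emission order.
class Broadcaster:
    def __init__(self):
        self.watchers = []  # list of (predicate, received-events list)

    def watch(self, predicate):
        events = []
        self.watchers.append((predicate, events))
        return events

    def emit(self, event_type, labels):
        for predicate, events in self.watchers:
            if predicate(labels):
                events.append(event_type)

b = Broadcaster()
got_a = b.watch(lambda l: l.get("watch-this-configmap") == "multiple-watchers-A")
got_b = b.watch(lambda l: l.get("watch-this-configmap") == "multiple-watchers-B")
got_ab = b.watch(lambda l: l.get("watch-this-configmap")
                 in ("multiple-watchers-A", "multiple-watchers-B"))

label_a = {"watch-this-configmap": "multiple-watchers-A"}
label_b = {"watch-this-configmap": "multiple-watchers-B"}
for ev in ("ADDED", "MODIFIED", "MODIFIED", "DELETED"):
    b.emit(ev, label_a)  # lifecycle of configmap A, as in the log
for ev in ("ADDED", "DELETED"):
    b.emit(ev, label_b)  # lifecycle of configmap B

print(got_a)   # ['ADDED', 'MODIFIED', 'MODIFIED', 'DELETED']
print(got_b)   # ['ADDED', 'DELETED']
print(got_ab)  # all six events, in order
```

Both `got_a` and `got_ab` see configmap A's events, matching the duplicated "Got : ..." pairs in the log above.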
Apr 15 13:21:04.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:21:04.398: INFO: namespace watch-3768 deletion completed in 6.153935576s
• [SLOW TEST:66.275 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:21:04.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-0585d335-2788-4032-a813-5340e6f99278
STEP: Creating a pod to test consume configMaps
Apr 15 13:21:04.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b" in namespace "configmap-6298" to be "success or failure"
Apr 15 13:21:04.485: INFO: Pod "pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.633764ms
Apr 15 13:21:06.489: INFO: Pod "pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018617209s
Apr 15 13:21:08.493: INFO: Pod "pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022798893s
STEP: Saw pod success
Apr 15 13:21:08.493: INFO: Pod "pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b" satisfied condition "success or failure"
Apr 15 13:21:08.495: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b container configmap-volume-test:
STEP: delete the pod
Apr 15 13:21:08.532: INFO: Waiting for pod pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b to disappear
Apr 15 13:21:08.550: INFO: Pod pod-configmaps-394f6e9f-1fb8-4ed9-93eb-69d42b07137b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:21:08.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6298" for this suite.
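The repeating "Phase=... Elapsed: ..." entries above come from the framework polling the pod's phase on a fixed interval until it reaches a terminal state or a 5m timeout. A minimal sketch of that polling pattern (an assumption about the shape of the loop, not the real e2e framework code; the phase source is injected so the sketch runs without a cluster):

```python
# Poll a pod-phase source every `interval` seconds until "Succeeded"/"Failed"
# or until `timeout` elapses, mirroring the 2s cadence seen in the log.
def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """get_phase() returns the current pod phase string, e.g. "Pending"."""
    elapsed = 0.0
    while elapsed <= timeout:
        phase = get_phase()
        if phase == "Succeeded":
            return "success"
        if phase == "Failed":
            return "failure"
        elapsed += interval  # a real poller would time.sleep(interval) here
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate the phases the log records: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_success_or_failure(lambda: next(phases)))  # -> success
```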
Apr 15 13:21:14.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:21:14.643: INFO: namespace configmap-6298 deletion completed in 6.089928291s
• [SLOW TEST:10.245 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:21:14.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0415 13:21:45.227764 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 15 13:21:45.227: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:21:45.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5332" for this suite.
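The garbage collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet it created is NOT collected. A minimal sketch of the propagation-policy semantics, as a toy object store (an assumed model, not the real garbage collector):

```python
# Toy delete with propagation policy: "Orphan" keeps dependents but unlinks
# their ownerReference to the deleted object; other policies cascade.
def delete(store, name, propagation_policy):
    dependents = [n for n, obj in store.items() if name in obj["owner_refs"]]
    del store[name]
    for dep in dependents:
        if propagation_policy == "Orphan":
            store[dep]["owner_refs"].remove(name)  # orphan: keep, unlink owner
        else:  # "Background"/"Foreground": cascade to dependents
            delete(store, dep, propagation_policy)

store = {
    "deployment": {"owner_refs": []},
    "replicaset": {"owner_refs": ["deployment"]},  # RS owned by the deployment
}
delete(store, "deployment", "Orphan")
print("replicaset" in store)              # True: the RS survives, as the test expects
print(store["replicaset"]["owner_refs"])  # []: its ownerReference was removed
```

With `"Background"` instead of `"Orphan"`, the same call would remove the ReplicaSet as well.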
Apr 15 13:21:51.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:21:51.317: INFO: namespace gc-5332 deletion completed in 6.087123389s • [SLOW TEST:36.674 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:21:51.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-849400c3-ad46-4ec5-8936-d909e1e7f828 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-849400c3-ad46-4ec5-8936-d909e1e7f828 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:23:27.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9574" for this suite. 
Apr 15 13:23:49.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:23:50.047: INFO: namespace configmap-9574 deletion completed in 22.106752657s
• [SLOW TEST:118.729 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:23:50.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:23:55.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1128" for this suite.
Apr 15 13:24:01.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:24:01.827: INFO: namespace watch-1128 deletion completed in 6.173176982s
• [SLOW TEST:11.780 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:24:01.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 15 13:24:01.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1316'
Apr 15 13:24:04.253: INFO: stderr: ""
Apr 15 13:24:04.253: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 15 13:24:05.257: INFO: Selector matched 1 pods for map[app:redis]
Apr 15 13:24:05.258: INFO: Found 0 / 1
Apr 15 13:24:06.258: INFO: Selector matched 1 pods for map[app:redis]
Apr 15 13:24:06.258: INFO: Found 0 / 1
Apr 15 13:24:07.282: INFO: Selector matched 1 pods for map[app:redis]
Apr 15 13:24:07.282: INFO: Found 1 / 1
Apr 15 13:24:07.282: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 15 13:24:07.285: INFO: Selector matched 1 pods for map[app:redis]
Apr 15 13:24:07.285: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 15 13:24:07.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j6gr redis-master --namespace=kubectl-1316'
Apr 15 13:24:07.387: INFO: stderr: ""
Apr 15 13:24:07.387: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Apr 13:24:06.834 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Apr 13:24:06.834 # Server started, Redis version 3.2.12\n1:M 15 Apr 13:24:06.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Apr 13:24:06.834 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 15 13:24:07.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j6gr redis-master --namespace=kubectl-1316 --tail=1'
Apr 15 13:24:07.481: INFO: stderr: ""
Apr 15 13:24:07.481: INFO: stdout: "1:M 15 Apr 13:24:06.834 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 15 13:24:07.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j6gr redis-master --namespace=kubectl-1316 --limit-bytes=1'
Apr 15 13:24:07.577: INFO: stderr: ""
Apr 15 13:24:07.577: INFO: stdout: " "
STEP: exposing timestamps
Apr 15 13:24:07.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j6gr redis-master --namespace=kubectl-1316 --tail=1 --timestamps'
Apr 15 13:24:07.683: INFO: stderr: ""
Apr 15 13:24:07.683: INFO: stdout: "2020-04-15T13:24:06.834587264Z 1:M 15 Apr 13:24:06.834 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 15 13:24:10.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j6gr redis-master --namespace=kubectl-1316 --since=1s'
Apr 15 13:24:10.300: INFO: stderr: ""
Apr 15 13:24:10.300: INFO: stdout: ""
Apr 15 13:24:10.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j6gr redis-master --namespace=kubectl-1316 --since=24h'
Apr 15 13:24:10.413: INFO: stderr: ""
Apr 15 13:24:10.413: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Apr 13:24:06.834 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Apr 13:24:06.834 # Server started, Redis version 3.2.12\n1:M 15 Apr 13:24:06.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Apr 13:24:06.834 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 15 13:24:10.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1316'
Apr 15 13:24:10.509: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 15 13:24:10.509: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 15 13:24:10.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1316'
Apr 15 13:24:10.614: INFO: stderr: "No resources found.\n"
Apr 15 13:24:10.614: INFO: stdout: ""
Apr 15 13:24:10.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1316 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 15 13:24:10.702: INFO: stderr: ""
Apr 15 13:24:10.702: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:24:10.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1316" for this suite.
Apr 15 13:24:32.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:24:32.819: INFO: namespace kubectl-1316 deletion completed in 22.113627586s
• [SLOW TEST:30.991 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:24:32.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 in namespace container-probe-2145
Apr 15 13:24:36.905: INFO: Started pod liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 in namespace container-probe-2145
STEP: checking the pod's current state and verifying that restartCount is present
Apr 15 13:24:36.908: INFO: Initial restart count of pod liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 is 0
Apr 15 13:24:48.935: INFO: Restart count of pod container-probe-2145/liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 is now 1 (12.027306949s elapsed)
Apr 15 13:25:08.977: INFO: Restart count of pod container-probe-2145/liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 is now 2 (32.06892552s elapsed)
Apr 15 13:25:31.020: INFO: Restart count of pod container-probe-2145/liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 is now 3 (54.1128024s elapsed)
Apr 15 13:25:49.059: INFO: Restart count of pod container-probe-2145/liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 is now 4 (1m12.151040006s elapsed)
Apr 15 13:26:59.209: INFO: Restart count of pod container-probe-2145/liveness-5d1e7ffd-d621-4535-a73b-98c833b34963 is now 5 (2m22.301581013s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:26:59.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2145" for this suite.
Apr 15 13:27:05.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:27:05.340: INFO: namespace container-probe-2145 deletion completed in 6.093731351s
• [SLOW TEST:152.521 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:27:05.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 15 13:27:09.992: INFO: Successfully updated pod "pod-update-activedeadlineseconds-26a7eb80-b80a-40b2-a44b-62cfc2466030"
Apr 15 13:27:09.992: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-26a7eb80-b80a-40b2-a44b-62cfc2466030" in namespace "pods-7024" to be "terminated due to deadline exceeded"
Apr 15 13:27:10.000: INFO: Pod "pod-update-activedeadlineseconds-26a7eb80-b80a-40b2-a44b-62cfc2466030": Phase="Running", Reason="", readiness=true. Elapsed: 7.626993ms
Apr 15 13:27:12.004: INFO: Pod "pod-update-activedeadlineseconds-26a7eb80-b80a-40b2-a44b-62cfc2466030": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.01165673s
Apr 15 13:27:12.004: INFO: Pod "pod-update-activedeadlineseconds-26a7eb80-b80a-40b2-a44b-62cfc2466030" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:27:12.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7024" for this suite.
Apr 15 13:27:18.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:27:18.090: INFO: namespace pods-7024 deletion completed in 6.08135648s
• [SLOW TEST:12.749 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:27:18.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 13:27:18.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901" in namespace "downward-api-9499" to be "success or failure"
Apr 15 13:27:18.218: INFO: Pod "downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901": Phase="Pending", Reason="", readiness=false. Elapsed: 26.271146ms
Apr 15 13:27:20.222: INFO: Pod "downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030190237s
Apr 15 13:27:22.226: INFO: Pod "downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03420296s
STEP: Saw pod success
Apr 15 13:27:22.226: INFO: Pod "downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901" satisfied condition "success or failure"
Apr 15 13:27:22.231: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901 container client-container:
STEP: delete the pod
Apr 15 13:27:22.253: INFO: Waiting for pod downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901 to disappear
Apr 15 13:27:22.272: INFO: Pod downwardapi-volume-a19d730b-2c7e-4140-ad8d-45d1c42ff901 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:27:22.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9499" for this suite.
Apr 15 13:27:28.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:27:28.389: INFO: namespace downward-api-9499 deletion completed in 6.11235049s
• [SLOW TEST:10.298 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:27:28.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 15 13:27:28.435: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:27:33.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8047" for this suite.
Apr 15 13:27:39.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:27:39.618: INFO: namespace init-container-8047 deletion completed in 6.083516901s
• [SLOW TEST:11.229 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:27:39.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-af6c2cfd-04b9-4141-abf4-30f3f98eb421 in namespace container-probe-7457
Apr 15 13:27:43.726: INFO: Started pod busybox-af6c2cfd-04b9-4141-abf4-30f3f98eb421 in namespace container-probe-7457
STEP: checking the pod's current state and verifying that restartCount is present
Apr 15 13:27:43.729: INFO: Initial restart count of pod busybox-af6c2cfd-04b9-4141-abf4-30f3f98eb421 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:31:44.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7457" for this suite.
Apr 15 13:31:50.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:31:50.637: INFO: namespace container-probe-7457 deletion completed in 6.11675425s
• [SLOW TEST:251.019 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:31:50.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-65eed2b0-1df7-4bc6-9439-ed4a3df11495
STEP: Creating a pod to test consume secrets
Apr 15 13:31:50.708: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728" in namespace "projected-815" to be "success or failure"
Apr 15 13:31:50.756: INFO: Pod "pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728": Phase="Pending", Reason="", readiness=false. Elapsed: 47.542864ms
Apr 15 13:31:52.759: INFO: Pod "pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051229864s
Apr 15 13:31:54.764: INFO: Pod "pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055507932s
STEP: Saw pod success
Apr 15 13:31:54.764: INFO: Pod "pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728" satisfied condition "success or failure"
Apr 15 13:31:54.767: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728 container projected-secret-volume-test:
STEP: delete the pod
Apr 15 13:31:54.799: INFO: Waiting for pod pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728 to disappear
Apr 15 13:31:54.812: INFO: Pod pod-projected-secrets-588d6bba-6d7b-40cb-a240-8bd17138b728 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:31:54.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-815" for this suite.
Apr 15 13:32:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:32:00.918: INFO: namespace projected-815 deletion completed in 6.101492455s
• [SLOW TEST:10.280 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:32:00.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 15 13:32:01.001: INFO: Waiting up to 5m0s for pod "pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0" in namespace "emptydir-9975" to be "success or failure"
Apr 15 13:32:01.004: INFO: Pod "pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.324294ms
Apr 15 13:32:03.008: INFO: Pod "pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007553394s
Apr 15 13:32:05.012: INFO: Pod "pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011662406s
STEP: Saw pod success
Apr 15 13:32:05.013: INFO: Pod "pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0" satisfied condition "success or failure"
Apr 15 13:32:05.015: INFO: Trying to get logs from node iruya-worker pod pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0 container test-container:
STEP: delete the pod
Apr 15 13:32:05.054: INFO: Waiting for pod pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0 to disappear
Apr 15 13:32:05.059: INFO: Pod pod-6780c758-8c9e-4065-8f7a-e1634b64e0c0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:32:05.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9975" for this suite.
Apr 15 13:32:11.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:32:11.154: INFO: namespace emptydir-9975 deletion completed in 6.091471454s
• [SLOW TEST:10.236 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:32:11.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 15 13:32:11.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8681'
Apr 15 13:32:11.478: INFO: stderr: ""
Apr 15 13:32:11.478: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 15 13:32:11.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8681'
Apr 15 13:32:21.864: INFO: stderr: ""
Apr 15 13:32:21.864: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:32:21.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8681" for this suite.
Apr 15 13:32:27.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:32:27.978: INFO: namespace kubectl-8681 deletion completed in 6.109852465s
• [SLOW TEST:16.823 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:32:27.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b1a063e1-0373-4d18-9842-409be791f094
STEP: Creating a pod to test consume configMaps
Apr 15 13:32:28.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3" in namespace "configmap-2237" to be "success or failure"
Apr 15 13:32:28.055: INFO: Pod "pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214302ms
Apr 15 13:32:30.059: INFO: Pod "pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008098809s
Apr 15 13:32:32.063: INFO: Pod "pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012554678s
STEP: Saw pod success
Apr 15 13:32:32.063: INFO: Pod "pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3" satisfied condition "success or failure"
Apr 15 13:32:32.066: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3 container configmap-volume-test:
STEP: delete the pod
Apr 15 13:32:32.094: INFO: Waiting for pod pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3 to disappear
Apr 15 13:32:32.106: INFO: Pod pod-configmaps-2d4a5b8e-81cb-4bd2-8761-0b1e42c600e3 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:32:32.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2237" for this suite.
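The test above mounts a ConfigMap as a volume with `defaultMode` set, which controls the file permission bits of the projected keys. A minimal sketch of that shape is below; the pod/container names, image, and the `0400` mode value are illustrative assumptions, not taken from this log (only the ConfigMap name appears in the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-b1a063e1-0373-4d18-9842-409be791f094
      defaultMode: 0400                 # permission bits applied to projected files; value illustrative
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```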
Apr 15 13:32:38.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:32:38.249: INFO: namespace configmap-2237 deletion completed in 6.13889022s
• [SLOW TEST:10.270 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:32:38.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 15 13:32:45.006: INFO: 8 pods remaining
Apr 15 13:32:45.006: INFO: 0 pods has nil DeletionTimestamp
Apr 15 13:32:45.006: INFO:
Apr 15 13:32:46.290: INFO: 0 pods remaining
Apr 15 13:32:46.290: INFO: 0 pods has nil DeletionTimestamp
Apr 15 13:32:46.290: INFO:
STEP: Gathering metrics
W0415 13:32:46.775632 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 15 13:32:46.775: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:32:46.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7685" for this suite.
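The "keep the rc around until all its pods are deleted" behavior exercised above corresponds to foreground cascading deletion: the owner is kept (with a deletion timestamp and a `foregroundDeletion` finalizer) until the garbage collector has removed its dependents. A sketch of the DeleteOptions body that requests this (shape assumed from the standard meta/v1 API, not shown in this log):

```yaml
# Request body for DELETE on the ReplicationController (sketch)
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner persists until dependents are gone
```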
Apr 15 13:32:53.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:32:53.104: INFO: namespace gc-7685 deletion completed in 6.262973369s
• [SLOW TEST:14.855 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:32:53.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 15 13:32:53.715: INFO: Pod name wrapped-volume-race-665ff6d6-c2dc-4fcd-83c6-c7267c52f129: Found 0 pods out of 5
Apr 15 13:32:58.724: INFO: Pod name wrapped-volume-race-665ff6d6-c2dc-4fcd-83c6-c7267c52f129: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-665ff6d6-c2dc-4fcd-83c6-c7267c52f129 in namespace emptydir-wrapper-6509, will wait for the garbage collector to delete the pods
Apr 15 13:33:10.844: INFO: Deleting ReplicationController wrapped-volume-race-665ff6d6-c2dc-4fcd-83c6-c7267c52f129 took: 8.497631ms
Apr 15 13:33:11.144: INFO: Terminating ReplicationController wrapped-volume-race-665ff6d6-c2dc-4fcd-83c6-c7267c52f129 pods took: 300.28309ms
STEP: Creating RC which spawns configmap-volume pods
Apr 15 13:33:52.530: INFO: Pod name wrapped-volume-race-734bc4ca-9ccd-442c-80a2-2c515fb03a82: Found 0 pods out of 5
Apr 15 13:33:57.536: INFO: Pod name wrapped-volume-race-734bc4ca-9ccd-442c-80a2-2c515fb03a82: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-734bc4ca-9ccd-442c-80a2-2c515fb03a82 in namespace emptydir-wrapper-6509, will wait for the garbage collector to delete the pods
Apr 15 13:34:11.620: INFO: Deleting ReplicationController wrapped-volume-race-734bc4ca-9ccd-442c-80a2-2c515fb03a82 took: 6.858235ms
Apr 15 13:34:11.920: INFO: Terminating ReplicationController wrapped-volume-race-734bc4ca-9ccd-442c-80a2-2c515fb03a82 pods took: 300.266463ms
STEP: Creating RC which spawns configmap-volume pods
Apr 15 13:34:52.345: INFO: Pod name wrapped-volume-race-84fabd6a-2524-47c2-9cd5-b8bd47c2d58e: Found 0 pods out of 5
Apr 15 13:34:57.397: INFO: Pod name wrapped-volume-race-84fabd6a-2524-47c2-9cd5-b8bd47c2d58e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-84fabd6a-2524-47c2-9cd5-b8bd47c2d58e in namespace emptydir-wrapper-6509, will wait for the garbage collector to delete the pods
Apr 15 13:35:11.485: INFO: Deleting ReplicationController wrapped-volume-race-84fabd6a-2524-47c2-9cd5-b8bd47c2d58e took: 6.958896ms
Apr 15 13:35:11.785: INFO: Terminating ReplicationController wrapped-volume-race-84fabd6a-2524-47c2-9cd5-b8bd47c2d58e pods took: 300.350668ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:35:52.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6509" for this suite.
Apr 15 13:36:00.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:36:00.952: INFO: namespace emptydir-wrapper-6509 deletion completed in 8.110698591s
• [SLOW TEST:187.848 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:36:00.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0415 13:36:12.864956 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 15 13:36:12.865: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:36:12.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2624" for this suite.
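The "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step above gives each such pod two entries in `metadata.ownerReferences`; a dependent is only collected once all of its owners are gone. A sketch of the resulting pod metadata (pod name and UIDs are placeholders, not values from this run):

```yaml
metadata:
  name: simpletest-pod-example            # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002   # placeholder UID
```

Because the second owner remains valid, the garbage collector must not delete these pods even while the first owner is being foreground-deleted.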
Apr 15 13:36:20.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:36:20.962: INFO: namespace gc-2624 deletion completed in 8.093894272s
• [SLOW TEST:20.009 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:36:20.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 15 13:36:21.015: INFO: Waiting up to 5m0s for pod "client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5" in namespace "containers-9791" to be "success or failure"
Apr 15 13:36:21.018: INFO: Pod "client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.679845ms
Apr 15 13:36:23.031: INFO: Pod "client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015806858s
Apr 15 13:36:25.035: INFO: Pod "client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020231136s
STEP: Saw pod success
Apr 15 13:36:25.035: INFO: Pod "client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5" satisfied condition "success or failure"
Apr 15 13:36:25.038: INFO: Trying to get logs from node iruya-worker2 pod client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5 container test-container:
STEP: delete the pod
Apr 15 13:36:25.069: INFO: Waiting for pod client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5 to disappear
Apr 15 13:36:25.078: INFO: Pod client-containers-910f7bdb-c055-4a6f-b70c-96ae8f6a32a5 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:36:25.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9791" for this suite.
Apr 15 13:36:31.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:36:31.189: INFO: namespace containers-9791 deletion completed in 6.107016394s
• [SLOW TEST:10.227 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:36:31.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 15 13:36:31.249: INFO: Waiting up to 5m0s for pod "pod-3c7c2e24-1302-49c4-bb08-59cf989cb636" in namespace "emptydir-6830" to be "success or failure"
Apr 15 13:36:31.264: INFO: Pod "pod-3c7c2e24-1302-49c4-bb08-59cf989cb636": Phase="Pending", Reason="", readiness=false. Elapsed: 15.265813ms
Apr 15 13:36:33.268: INFO: Pod "pod-3c7c2e24-1302-49c4-bb08-59cf989cb636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019100891s
Apr 15 13:36:35.272: INFO: Pod "pod-3c7c2e24-1302-49c4-bb08-59cf989cb636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0234847s
STEP: Saw pod success
Apr 15 13:36:35.272: INFO: Pod "pod-3c7c2e24-1302-49c4-bb08-59cf989cb636" satisfied condition "success or failure"
Apr 15 13:36:35.275: INFO: Trying to get logs from node iruya-worker2 pod pod-3c7c2e24-1302-49c4-bb08-59cf989cb636 container test-container:
STEP: delete the pod
Apr 15 13:36:35.309: INFO: Waiting for pod pod-3c7c2e24-1302-49c4-bb08-59cf989cb636 to disappear
Apr 15 13:36:35.346: INFO: Pod pod-3c7c2e24-1302-49c4-bb08-59cf989cb636 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:36:35.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6830" for this suite.
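The "(non-root,0644,tmpfs)" case above runs a pod as a non-root user writing a 0644 file into a memory-backed emptyDir (`medium: Memory` mounts tmpfs). A minimal sketch of that pod shape; the names, image, user ID, and command are illustrative assumptions, not taken from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                      # non-root; value illustrative
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29    # illustrative image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```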
Apr 15 13:36:41.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:36:41.458: INFO: namespace emptydir-6830 deletion completed in 6.108409805s
• [SLOW TEST:10.269 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:36:41.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1034
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 15 13:36:41.567: INFO: Found 0 stateful pods, waiting for 3
Apr 15 13:36:51.572: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:36:51.572: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:36:51.572: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:36:51.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1034 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 15 13:36:54.008: INFO: stderr: "I0415 13:36:53.878046 741 log.go:172] (0xc000b4a370) (0xc0006c6aa0) Create stream\nI0415 13:36:53.878092 741 log.go:172] (0xc000b4a370) (0xc0006c6aa0) Stream added, broadcasting: 1\nI0415 13:36:53.879486 741 log.go:172] (0xc000b4a370) Reply frame received for 1\nI0415 13:36:53.879518 741 log.go:172] (0xc000b4a370) (0xc0008c4000) Create stream\nI0415 13:36:53.879526 741 log.go:172] (0xc000b4a370) (0xc0008c4000) Stream added, broadcasting: 3\nI0415 13:36:53.880272 741 log.go:172] (0xc000b4a370) Reply frame received for 3\nI0415 13:36:53.880312 741 log.go:172] (0xc000b4a370) (0xc0008c40a0) Create stream\nI0415 13:36:53.880323 741 log.go:172] (0xc000b4a370) (0xc0008c40a0) Stream added, broadcasting: 5\nI0415 13:36:53.881251 741 log.go:172] (0xc000b4a370) Reply frame received for 5\nI0415 13:36:53.968467 741 log.go:172] (0xc000b4a370) Data frame received for 5\nI0415 13:36:53.968494 741 log.go:172] (0xc0008c40a0) (5) Data frame handling\nI0415 13:36:53.968510 741 log.go:172] (0xc0008c40a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 13:36:53.998793 741 log.go:172] (0xc000b4a370) Data frame received for 3\nI0415 13:36:53.998824 741 log.go:172] (0xc0008c4000) (3) Data frame handling\nI0415 13:36:53.998845 741 log.go:172] (0xc0008c4000) (3) Data frame sent\nI0415 13:36:53.998855 741 log.go:172] (0xc000b4a370) Data frame received for 3\nI0415 13:36:53.998866 741 log.go:172] (0xc0008c4000) (3) Data frame handling\nI0415 13:36:53.999241 741 log.go:172] (0xc000b4a370) Data frame received for 5\nI0415 13:36:53.999275 741 log.go:172] (0xc0008c40a0) (5) Data frame handling\nI0415 13:36:54.001490 741 log.go:172] (0xc000b4a370) Data frame received for 1\nI0415 13:36:54.001519 741 log.go:172] (0xc0006c6aa0) (1) Data frame handling\nI0415 13:36:54.001536 741 log.go:172] (0xc0006c6aa0) (1) Data frame sent\nI0415 13:36:54.001556 741 log.go:172] (0xc000b4a370) (0xc0006c6aa0) Stream removed, broadcasting: 1\nI0415 13:36:54.001618 741 log.go:172] (0xc000b4a370) Go away received\nI0415 13:36:54.001949 741 log.go:172] (0xc000b4a370) (0xc0006c6aa0) Stream removed, broadcasting: 1\nI0415 13:36:54.001971 741 log.go:172] (0xc000b4a370) (0xc0008c4000) Stream removed, broadcasting: 3\nI0415 13:36:54.001986 741 log.go:172] (0xc000b4a370) (0xc0008c40a0) Stream removed, broadcasting: 5\n"
Apr 15 13:36:54.009: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 15 13:36:54.009: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 15 13:37:04.040: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Apr 15 13:37:14.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1034 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 15 13:37:14.333: INFO: stderr: "I0415 13:37:14.248181 776 log.go:172] (0xc000116dc0) (0xc00077c640) Create stream\nI0415 13:37:14.248231 776 log.go:172] (0xc000116dc0) (0xc00077c640) Stream added, broadcasting: 1\nI0415 13:37:14.250556 776 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0415 13:37:14.250613 776 log.go:172] (0xc000116dc0) (0xc0005dc1e0) Create stream\nI0415 13:37:14.250630 776 log.go:172] (0xc000116dc0) (0xc0005dc1e0) Stream added, broadcasting: 3\nI0415 13:37:14.251671 776 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0415 13:37:14.251726 776 log.go:172] (0xc000116dc0) (0xc00077e000) Create stream\nI0415 13:37:14.251761 776 log.go:172] (0xc000116dc0) (0xc00077e000) Stream added, broadcasting: 5\nI0415 13:37:14.252772 776 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0415 13:37:14.327251 776 log.go:172] (0xc000116dc0) Data frame received for 5\nI0415 13:37:14.327294 776 log.go:172] (0xc00077e000) (5) Data frame handling\nI0415 13:37:14.327303 776 log.go:172] (0xc00077e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 13:37:14.327340 776 log.go:172] (0xc000116dc0) Data frame received for 3\nI0415 13:37:14.327376 776 log.go:172] (0xc0005dc1e0) (3) Data frame handling\nI0415 13:37:14.327409 776 log.go:172] (0xc0005dc1e0) (3) Data frame sent\nI0415 13:37:14.327430 776 log.go:172] (0xc000116dc0) Data frame received for 3\nI0415 13:37:14.327446 776 log.go:172] (0xc0005dc1e0) (3) Data frame handling\nI0415 13:37:14.327464 776 log.go:172] (0xc000116dc0) Data frame received for 5\nI0415 13:37:14.327478 776 log.go:172] (0xc00077e000) (5) Data frame handling\nI0415 13:37:14.328818 776 log.go:172] (0xc000116dc0) Data frame received for 1\nI0415 13:37:14.328831 776 log.go:172] (0xc00077c640) (1) Data frame handling\nI0415 13:37:14.328840 776 log.go:172] (0xc00077c640) (1) Data frame sent\nI0415 13:37:14.328936 776 log.go:172] (0xc000116dc0) (0xc00077c640) Stream removed, broadcasting: 1\nI0415 13:37:14.329043 776 log.go:172] (0xc000116dc0) Go away received\nI0415 13:37:14.329283 776 log.go:172] (0xc000116dc0) (0xc00077c640) Stream removed, broadcasting: 1\nI0415 13:37:14.329296 776 log.go:172] (0xc000116dc0) (0xc0005dc1e0) Stream removed, broadcasting: 3\nI0415 13:37:14.329302 776 log.go:172] (0xc000116dc0) (0xc00077e000) Stream removed, broadcasting: 5\n"
Apr 15 13:37:14.333: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 15 13:37:14.333: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 15 13:37:24.354: INFO: Waiting for StatefulSet statefulset-1034/ss2 to complete update
Apr 15 13:37:24.354: INFO: Waiting for Pod statefulset-1034/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 15 13:37:24.354: INFO: Waiting for Pod statefulset-1034/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 15 13:37:34.362: INFO: Waiting for StatefulSet statefulset-1034/ss2 to complete update
Apr 15 13:37:34.362: INFO: Waiting for Pod statefulset-1034/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Apr 15 13:37:44.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1034 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 15 13:37:44.620: INFO: stderr: "I0415 13:37:44.497825 798 log.go:172] (0xc0009a6420) (0xc000596820) Create stream\nI0415 13:37:44.497889 798 log.go:172] (0xc0009a6420) (0xc000596820) Stream added, broadcasting: 1\nI0415 13:37:44.501775 798 log.go:172] (0xc0009a6420) Reply frame received for 1\nI0415 13:37:44.501824 798 log.go:172] (0xc0009a6420) (0xc000596000) Create stream\nI0415 13:37:44.501840 798 log.go:172] (0xc0009a6420) (0xc000596000) Stream added, broadcasting: 3\nI0415 13:37:44.502904 798 log.go:172] (0xc0009a6420) Reply frame received for 3\nI0415 13:37:44.502936 798 log.go:172] (0xc0009a6420) (0xc0001f0280) Create stream\nI0415 13:37:44.502947 798 log.go:172] (0xc0009a6420) (0xc0001f0280) Stream added, broadcasting: 5\nI0415 13:37:44.504127 798 log.go:172] (0xc0009a6420) Reply frame received for 5\nI0415 13:37:44.583025 798 log.go:172] (0xc0009a6420) Data frame received for 5\nI0415 13:37:44.583066 798 log.go:172] (0xc0001f0280) (5) Data frame handling\nI0415 13:37:44.583091 798 log.go:172] (0xc0001f0280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 13:37:44.612018 798 log.go:172] (0xc0009a6420) Data frame received for 5\nI0415 13:37:44.612066 798 log.go:172] (0xc0001f0280) (5) Data frame handling\nI0415 13:37:44.612092 798 log.go:172] (0xc0009a6420) Data frame received for 3\nI0415 13:37:44.612103 798 log.go:172] (0xc000596000) (3) Data frame handling\nI0415 13:37:44.612123 798 log.go:172] (0xc000596000) (3) Data frame sent\nI0415 13:37:44.612138 798 log.go:172] (0xc0009a6420) Data frame received for 3\nI0415 13:37:44.612151 798 log.go:172] (0xc000596000) (3) Data frame handling\nI0415 13:37:44.613694 798 log.go:172] (0xc0009a6420) Data frame received for 1\nI0415 13:37:44.613722 798 log.go:172] (0xc000596820) (1) Data frame handling\nI0415 13:37:44.613737 798 log.go:172] (0xc000596820) (1) Data frame sent\nI0415 13:37:44.613760 798 log.go:172] (0xc0009a6420) (0xc000596820) Stream removed, broadcasting: 1\nI0415 13:37:44.613788 798 log.go:172] (0xc0009a6420) Go away received\nI0415 13:37:44.614188 798 log.go:172] (0xc0009a6420) (0xc000596820) Stream removed, broadcasting: 1\nI0415 13:37:44.614212 798 log.go:172] (0xc0009a6420) (0xc000596000) Stream removed, broadcasting: 3\nI0415 13:37:44.614224 798 log.go:172] (0xc0009a6420) (0xc0001f0280) Stream removed, broadcasting: 5\n"
Apr 15 13:37:44.620: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 15 13:37:44.620: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 15 13:37:54.652: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 15 13:38:04.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1034 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 15 13:38:04.898: INFO: stderr: "I0415 13:38:04.805729 818 log.go:172] (0xc000116f20) (0xc0005a8aa0) Create stream\nI0415 13:38:04.805797 818 log.go:172] (0xc000116f20) (0xc0005a8aa0) Stream added, broadcasting: 1\nI0415 13:38:04.808638 818 log.go:172] (0xc000116f20) Reply frame received for 1\nI0415 13:38:04.808672 818 log.go:172] (0xc000116f20) (0xc000a56000) Create stream\nI0415 13:38:04.808682 818 log.go:172] (0xc000116f20) (0xc000a56000) Stream added, broadcasting: 3\nI0415 13:38:04.810036 818 log.go:172] (0xc000116f20) Reply frame received for 3\nI0415 13:38:04.810110 818 log.go:172] (0xc000116f20) (0xc000908000) Create stream\nI0415 13:38:04.810160 818 log.go:172] (0xc000116f20) (0xc000908000) Stream added, broadcasting: 5\nI0415 13:38:04.811292 818 log.go:172] (0xc000116f20) Reply frame received for 5\nI0415 13:38:04.893393 818 log.go:172] (0xc000116f20) Data frame received for 5\nI0415 13:38:04.893421 818 log.go:172] (0xc000908000) (5) Data frame handling\nI0415 13:38:04.893428 818 log.go:172] (0xc000908000) (5) Data frame sent\nI0415 13:38:04.893433 818 log.go:172] (0xc000116f20) Data frame received for 5\nI0415 13:38:04.893437 818 log.go:172] (0xc000908000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 13:38:04.893452 818 log.go:172] (0xc000116f20) Data frame received for 3\nI0415 13:38:04.893457 818 log.go:172] (0xc000a56000) (3) Data frame handling\nI0415 13:38:04.893462 818 log.go:172] (0xc000a56000) (3) Data frame sent\nI0415 13:38:04.893466 818 log.go:172] (0xc000116f20) Data frame received for 3\nI0415 13:38:04.893470 818 log.go:172] (0xc000a56000) (3) Data frame handling\nI0415 13:38:04.894494 818 log.go:172] (0xc000116f20) Data frame received for 1\nI0415 13:38:04.894512 818 log.go:172] (0xc0005a8aa0) (1) Data frame handling\nI0415 13:38:04.894523 818 log.go:172] (0xc0005a8aa0) (1) Data frame sent\nI0415 13:38:04.894532 818 log.go:172] (0xc000116f20) (0xc0005a8aa0) Stream removed, broadcasting: 1\nI0415 13:38:04.894612 818 log.go:172] (0xc000116f20) Go away received\nI0415 13:38:04.895248 818 log.go:172] (0xc000116f20) (0xc0005a8aa0) Stream removed, broadcasting: 1\nI0415 13:38:04.895270 818 log.go:172] (0xc000116f20) (0xc000a56000) Stream removed, broadcasting: 3\nI0415 13:38:04.895291 818 log.go:172] (0xc000116f20) (0xc000908000) Stream removed, broadcasting: 5\n"
Apr 15 13:38:04.898: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 15 13:38:04.898: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 15 13:38:14.920: INFO: Waiting for StatefulSet statefulset-1034/ss2 to complete update
Apr 15 13:38:14.920: INFO: Waiting for Pod statefulset-1034/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 15 13:38:14.920: INFO: Waiting for Pod statefulset-1034/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 15 13:38:24.961: INFO: Waiting for StatefulSet statefulset-1034/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 15 13:38:34.929: INFO: Deleting all statefulset in ns statefulset-1034
Apr 15 13:38:34.932: INFO: Scaling statefulset ss2 to 0
Apr 15 13:38:54.969: INFO: Waiting for statefulset status.replicas updated to 0
Apr 15 13:38:54.971: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:38:54.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1034" for this suite.
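The rolling update and rollback exercised above ("Updating Pods in reverse ordinal order") is driven by the StatefulSet's `RollingUpdate` strategy, with the image change from `nginx:1.14-alpine` to `nginx:1.15-alpine` producing the new controller revision. A minimal sketch of the shape involved; the labels and template details are illustrative assumptions, while the names `ss2`, `test`, the replica count, and the images come from this run:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-1034
spec:
  replicas: 3
  serviceName: test                      # headless service created by the test
  updateStrategy:
    type: RollingUpdate                  # pods updated one at a time, highest ordinal first
  selector:
    matchLabels:
      app: ss2-example                   # hypothetical label
  template:
    metadata:
      labels:
        app: ss2-example
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the updated image; rollback restores 1.14-alpine
```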
Apr 15 13:39:01.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:39:01.090: INFO: namespace statefulset-1034 deletion completed in 6.100313481s • [SLOW TEST:139.631 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:39:01.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5632 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 15 13:39:01.134: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 15 13:39:25.275: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.49 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5632 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:39:25.275: INFO: >>> kubeConfig: /root/.kube/config I0415 13:39:25.310527 6 log.go:172] (0xc0016a18c0) (0xc001693360) Create stream I0415 13:39:25.310557 6 log.go:172] (0xc0016a18c0) (0xc001693360) Stream added, broadcasting: 1 I0415 13:39:25.313044 6 log.go:172] (0xc0016a18c0) Reply frame received for 1 I0415 13:39:25.313090 6 log.go:172] (0xc0016a18c0) (0xc001693400) Create stream I0415 13:39:25.313106 6 log.go:172] (0xc0016a18c0) (0xc001693400) Stream added, broadcasting: 3 I0415 13:39:25.314578 6 log.go:172] (0xc0016a18c0) Reply frame received for 3 I0415 13:39:25.314621 6 log.go:172] (0xc0016a18c0) (0xc0016934a0) Create stream I0415 13:39:25.314636 6 log.go:172] (0xc0016a18c0) (0xc0016934a0) Stream added, broadcasting: 5 I0415 13:39:25.315648 6 log.go:172] (0xc0016a18c0) Reply frame received for 5 I0415 13:39:26.376410 6 log.go:172] (0xc0016a18c0) Data frame received for 5 I0415 13:39:26.376460 6 log.go:172] (0xc0016934a0) (5) Data frame handling I0415 13:39:26.376486 6 log.go:172] (0xc0016a18c0) Data frame received for 3 I0415 13:39:26.376504 6 log.go:172] (0xc001693400) (3) Data frame handling I0415 13:39:26.376531 6 log.go:172] (0xc001693400) (3) Data frame sent I0415 13:39:26.376547 6 log.go:172] (0xc0016a18c0) Data frame received for 3 I0415 13:39:26.376571 6 log.go:172] (0xc001693400) (3) Data frame handling I0415 13:39:26.379187 6 log.go:172] (0xc0016a18c0) Data frame received for 1 I0415 13:39:26.379215 6 log.go:172] (0xc001693360) (1) Data frame handling I0415 13:39:26.379228 6 log.go:172] (0xc001693360) (1) Data frame sent I0415 13:39:26.379247 6 log.go:172] (0xc0016a18c0) (0xc001693360) Stream removed, broadcasting: 1 I0415 13:39:26.379371 6 log.go:172] (0xc0016a18c0) Go away received I0415 13:39:26.379444 6 log.go:172] (0xc0016a18c0) (0xc001693360) Stream removed, broadcasting: 1 I0415 13:39:26.379493 6 log.go:172] 
(0xc0016a18c0) (0xc001693400) Stream removed, broadcasting: 3 I0415 13:39:26.379511 6 log.go:172] (0xc0016a18c0) (0xc0016934a0) Stream removed, broadcasting: 5 Apr 15 13:39:26.379: INFO: Found all expected endpoints: [netserver-0] Apr 15 13:39:26.382: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.80 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5632 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:39:26.382: INFO: >>> kubeConfig: /root/.kube/config I0415 13:39:26.419435 6 log.go:172] (0xc002ea2a50) (0xc002c160a0) Create stream I0415 13:39:26.419461 6 log.go:172] (0xc002ea2a50) (0xc002c160a0) Stream added, broadcasting: 1 I0415 13:39:26.422472 6 log.go:172] (0xc002ea2a50) Reply frame received for 1 I0415 13:39:26.422502 6 log.go:172] (0xc002ea2a50) (0xc0030121e0) Create stream I0415 13:39:26.422510 6 log.go:172] (0xc002ea2a50) (0xc0030121e0) Stream added, broadcasting: 3 I0415 13:39:26.423462 6 log.go:172] (0xc002ea2a50) Reply frame received for 3 I0415 13:39:26.423498 6 log.go:172] (0xc002ea2a50) (0xc002ee00a0) Create stream I0415 13:39:26.423513 6 log.go:172] (0xc002ea2a50) (0xc002ee00a0) Stream added, broadcasting: 5 I0415 13:39:26.424393 6 log.go:172] (0xc002ea2a50) Reply frame received for 5 I0415 13:39:27.502390 6 log.go:172] (0xc002ea2a50) Data frame received for 5 I0415 13:39:27.502449 6 log.go:172] (0xc002ee00a0) (5) Data frame handling I0415 13:39:27.502492 6 log.go:172] (0xc002ea2a50) Data frame received for 3 I0415 13:39:27.502513 6 log.go:172] (0xc0030121e0) (3) Data frame handling I0415 13:39:27.502549 6 log.go:172] (0xc0030121e0) (3) Data frame sent I0415 13:39:27.502705 6 log.go:172] (0xc002ea2a50) Data frame received for 3 I0415 13:39:27.502735 6 log.go:172] (0xc0030121e0) (3) Data frame handling I0415 13:39:27.504954 6 log.go:172] (0xc002ea2a50) Data frame received for 1 I0415 13:39:27.504984 6 log.go:172] (0xc002c160a0) (1) 
Data frame handling I0415 13:39:27.505008 6 log.go:172] (0xc002c160a0) (1) Data frame sent I0415 13:39:27.505031 6 log.go:172] (0xc002ea2a50) (0xc002c160a0) Stream removed, broadcasting: 1 I0415 13:39:27.505049 6 log.go:172] (0xc002ea2a50) Go away received I0415 13:39:27.505342 6 log.go:172] (0xc002ea2a50) (0xc002c160a0) Stream removed, broadcasting: 1 I0415 13:39:27.505368 6 log.go:172] (0xc002ea2a50) (0xc0030121e0) Stream removed, broadcasting: 3 I0415 13:39:27.505379 6 log.go:172] (0xc002ea2a50) (0xc002ee00a0) Stream removed, broadcasting: 5 Apr 15 13:39:27.505: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:39:27.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5632" for this suite. Apr 15 13:39:47.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:39:47.592: INFO: namespace pod-network-test-5632 deletion completed in 20.083068358s • [SLOW TEST:46.501 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 15 13:39:47.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ef93252b-ffc4-4287-92f7-51f1971650db STEP: Creating a pod to test consume secrets Apr 15 13:39:47.654: INFO: Waiting up to 5m0s for pod "pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28" in namespace "secrets-1048" to be "success or failure" Apr 15 13:39:47.657: INFO: Pod "pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.229432ms Apr 15 13:39:49.661: INFO: Pod "pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007308262s Apr 15 13:39:51.666: INFO: Pod "pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012008254s STEP: Saw pod success Apr 15 13:39:51.666: INFO: Pod "pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28" satisfied condition "success or failure" Apr 15 13:39:51.669: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28 container secret-env-test: STEP: delete the pod Apr 15 13:39:51.688: INFO: Waiting for pod pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28 to disappear Apr 15 13:39:51.721: INFO: Pod pod-secrets-1585f091-f580-4dd3-86ec-db9d269b4e28 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:39:51.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1048" for this suite. 
Apr 15 13:39:57.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:39:57.820: INFO: namespace secrets-1048 deletion completed in 6.09453838s • [SLOW TEST:10.228 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:39:57.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 13:39:57.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50" in namespace "projected-9976" to be "success or failure" Apr 15 13:39:57.910: INFO: Pod "downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.043461ms Apr 15 13:39:59.914: INFO: Pod "downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022273911s Apr 15 13:40:01.918: INFO: Pod "downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026784855s STEP: Saw pod success Apr 15 13:40:01.918: INFO: Pod "downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50" satisfied condition "success or failure" Apr 15 13:40:01.921: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50 container client-container: STEP: delete the pod Apr 15 13:40:01.945: INFO: Waiting for pod downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50 to disappear Apr 15 13:40:01.949: INFO: Pod downwardapi-volume-60ffdffd-0e93-42ea-b2e4-ddd9ef59fb50 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:40:01.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9976" for this suite. 
Apr 15 13:40:07.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:40:08.049: INFO: namespace projected-9976 deletion completed in 6.097726372s • [SLOW TEST:10.229 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:40:08.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:40:08.093: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:40:12.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5168" for this suite. 
Apr 15 13:40:52.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:40:52.377: INFO: namespace pods-5168 deletion completed in 40.125655521s • [SLOW TEST:44.327 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:40:52.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:40:52.483: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.617284ms) Apr 15 13:40:52.486: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.946585ms) Apr 15 13:40:52.489: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.797912ms) Apr 15 13:40:52.492: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.862479ms) Apr 15 13:40:52.495: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.937294ms) Apr 15 13:40:52.498: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.429861ms) Apr 15 13:40:52.500: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.708175ms) Apr 15 13:40:52.503: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.679279ms) Apr 15 13:40:52.506: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.017427ms) Apr 15 13:40:52.509: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.218785ms) Apr 15 13:40:52.512: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.990885ms) Apr 15 13:40:52.515: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.132245ms) Apr 15 13:40:52.519: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.125472ms) Apr 15 13:40:52.522: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.027987ms) Apr 15 13:40:52.525: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.272896ms) Apr 15 13:40:52.528: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.151201ms) Apr 15 13:40:52.532: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.476029ms) Apr 15 13:40:52.535: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.712246ms) Apr 15 13:40:52.539: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.605603ms) Apr 15 13:40:52.543: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.685179ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:40:52.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8829" for this suite. Apr 15 13:40:58.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:40:58.639: INFO: namespace proxy-8829 deletion completed in 6.092652844s • [SLOW TEST:6.261 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:40:58.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 15 13:40:58.723: INFO: Waiting up to 5m0s for pod "pod-fb0502ca-1044-49fa-9afa-50532232aa1b" in namespace "emptydir-2100" to be "success or failure" Apr 15 
13:40:58.728: INFO: Pod "pod-fb0502ca-1044-49fa-9afa-50532232aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14881ms Apr 15 13:41:00.732: INFO: Pod "pod-fb0502ca-1044-49fa-9afa-50532232aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008422596s Apr 15 13:41:02.736: INFO: Pod "pod-fb0502ca-1044-49fa-9afa-50532232aa1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012389894s STEP: Saw pod success Apr 15 13:41:02.736: INFO: Pod "pod-fb0502ca-1044-49fa-9afa-50532232aa1b" satisfied condition "success or failure" Apr 15 13:41:02.739: INFO: Trying to get logs from node iruya-worker2 pod pod-fb0502ca-1044-49fa-9afa-50532232aa1b container test-container: STEP: delete the pod Apr 15 13:41:02.771: INFO: Waiting for pod pod-fb0502ca-1044-49fa-9afa-50532232aa1b to disappear Apr 15 13:41:02.782: INFO: Pod pod-fb0502ca-1044-49fa-9afa-50532232aa1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:41:02.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2100" for this suite. 
Apr 15 13:41:08.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:41:08.890: INFO: namespace emptydir-2100 deletion completed in 6.104227678s • [SLOW TEST:10.250 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:41:08.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the 
expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:41:38.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7667" for this suite. Apr 15 13:41:44.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:41:44.405: INFO: namespace container-runtime-7667 deletion completed in 6.090337804s • [SLOW TEST:35.515 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:41:44.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca Apr 15 13:41:44.472: INFO: Pod name my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca: Found 0 pods out of 1 Apr 15 13:41:49.489: INFO: Pod name my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca: Found 1 pods out of 1 Apr 15 13:41:49.489: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca" are running Apr 15 13:41:49.492: INFO: Pod "my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca-tlg4p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 13:41:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 13:41:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 13:41:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 13:41:44 +0000 UTC Reason: Message:}]) Apr 15 13:41:49.492: INFO: Trying to dial the pod Apr 15 13:41:54.504: INFO: Controller my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca: Got expected result from replica 1 [my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca-tlg4p]: "my-hostname-basic-f27b2e47-7ca2-431f-8775-893e7a0ca5ca-tlg4p", 1 of 1 required successes 
so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:41:54.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8832" for this suite. Apr 15 13:42:00.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:42:00.602: INFO: namespace replication-controller-8832 deletion completed in 6.093409014s • [SLOW TEST:16.196 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:42:00.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8109 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 15 13:42:00.682: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: 
Creating test pods Apr 15 13:42:26.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.54:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8109 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:42:26.794: INFO: >>> kubeConfig: /root/.kube/config I0415 13:42:26.830931 6 log.go:172] (0xc002ea2a50) (0xc001bf43c0) Create stream I0415 13:42:26.830993 6 log.go:172] (0xc002ea2a50) (0xc001bf43c0) Stream added, broadcasting: 1 I0415 13:42:26.836177 6 log.go:172] (0xc002ea2a50) Reply frame received for 1 I0415 13:42:26.836225 6 log.go:172] (0xc002ea2a50) (0xc0030a5180) Create stream I0415 13:42:26.836236 6 log.go:172] (0xc002ea2a50) (0xc0030a5180) Stream added, broadcasting: 3 I0415 13:42:26.837495 6 log.go:172] (0xc002ea2a50) Reply frame received for 3 I0415 13:42:26.837544 6 log.go:172] (0xc002ea2a50) (0xc001bf4500) Create stream I0415 13:42:26.837586 6 log.go:172] (0xc002ea2a50) (0xc001bf4500) Stream added, broadcasting: 5 I0415 13:42:26.838539 6 log.go:172] (0xc002ea2a50) Reply frame received for 5 I0415 13:42:26.936836 6 log.go:172] (0xc002ea2a50) Data frame received for 5 I0415 13:42:26.936868 6 log.go:172] (0xc001bf4500) (5) Data frame handling I0415 13:42:26.936916 6 log.go:172] (0xc002ea2a50) Data frame received for 3 I0415 13:42:26.936963 6 log.go:172] (0xc0030a5180) (3) Data frame handling I0415 13:42:26.936997 6 log.go:172] (0xc0030a5180) (3) Data frame sent I0415 13:42:26.937021 6 log.go:172] (0xc002ea2a50) Data frame received for 3 I0415 13:42:26.937039 6 log.go:172] (0xc0030a5180) (3) Data frame handling I0415 13:42:26.938967 6 log.go:172] (0xc002ea2a50) Data frame received for 1 I0415 13:42:26.938995 6 log.go:172] (0xc001bf43c0) (1) Data frame handling I0415 13:42:26.939015 6 log.go:172] (0xc001bf43c0) (1) Data frame sent I0415 13:42:26.939032 6 log.go:172] (0xc002ea2a50) (0xc001bf43c0) Stream removed, 
broadcasting: 1 I0415 13:42:26.939048 6 log.go:172] (0xc002ea2a50) Go away received I0415 13:42:26.939175 6 log.go:172] (0xc002ea2a50) (0xc001bf43c0) Stream removed, broadcasting: 1 I0415 13:42:26.939189 6 log.go:172] (0xc002ea2a50) (0xc0030a5180) Stream removed, broadcasting: 3 I0415 13:42:26.939196 6 log.go:172] (0xc002ea2a50) (0xc001bf4500) Stream removed, broadcasting: 5 Apr 15 13:42:26.939: INFO: Found all expected endpoints: [netserver-0] Apr 15 13:42:26.942: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.86:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8109 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:42:26.942: INFO: >>> kubeConfig: /root/.kube/config I0415 13:42:26.978734 6 log.go:172] (0xc000dd6630) (0xc00032e820) Create stream I0415 13:42:26.978761 6 log.go:172] (0xc000dd6630) (0xc00032e820) Stream added, broadcasting: 1 I0415 13:42:26.980460 6 log.go:172] (0xc000dd6630) Reply frame received for 1 I0415 13:42:26.980508 6 log.go:172] (0xc000dd6630) (0xc00227a3c0) Create stream I0415 13:42:26.980522 6 log.go:172] (0xc000dd6630) (0xc00227a3c0) Stream added, broadcasting: 3 I0415 13:42:26.981740 6 log.go:172] (0xc000dd6630) Reply frame received for 3 I0415 13:42:26.981771 6 log.go:172] (0xc000dd6630) (0xc0027380a0) Create stream I0415 13:42:26.981781 6 log.go:172] (0xc000dd6630) (0xc0027380a0) Stream added, broadcasting: 5 I0415 13:42:26.982869 6 log.go:172] (0xc000dd6630) Reply frame received for 5 I0415 13:42:27.047245 6 log.go:172] (0xc000dd6630) Data frame received for 3 I0415 13:42:27.047285 6 log.go:172] (0xc00227a3c0) (3) Data frame handling I0415 13:42:27.047299 6 log.go:172] (0xc00227a3c0) (3) Data frame sent I0415 13:42:27.047311 6 log.go:172] (0xc000dd6630) Data frame received for 3 I0415 13:42:27.047330 6 log.go:172] (0xc00227a3c0) (3) Data frame handling I0415 13:42:27.047419 6 
log.go:172] (0xc000dd6630) Data frame received for 5 I0415 13:42:27.047442 6 log.go:172] (0xc0027380a0) (5) Data frame handling I0415 13:42:27.049543 6 log.go:172] (0xc000dd6630) Data frame received for 1 I0415 13:42:27.049593 6 log.go:172] (0xc00032e820) (1) Data frame handling I0415 13:42:27.049620 6 log.go:172] (0xc00032e820) (1) Data frame sent I0415 13:42:27.049650 6 log.go:172] (0xc000dd6630) (0xc00032e820) Stream removed, broadcasting: 1 I0415 13:42:27.049678 6 log.go:172] (0xc000dd6630) Go away received I0415 13:42:27.049746 6 log.go:172] (0xc000dd6630) (0xc00032e820) Stream removed, broadcasting: 1 I0415 13:42:27.049771 6 log.go:172] (0xc000dd6630) (0xc00227a3c0) Stream removed, broadcasting: 3 I0415 13:42:27.049796 6 log.go:172] (0xc000dd6630) (0xc0027380a0) Stream removed, broadcasting: 5 Apr 15 13:42:27.049: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:42:27.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8109" for this suite. 
Apr 15 13:42:51.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:42:51.150: INFO: namespace pod-network-test-8109 deletion completed in 24.09503325s
• [SLOW TEST:50.548 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:42:51.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 15 13:42:55.780: INFO: Successfully updated pod "annotationupdateeef5c0ff-7ada-4734-9169-456f842065bd"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:42:57.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1208" for this suite.
Apr 15 13:43:19.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:43:19.914: INFO: namespace downward-api-1208 deletion completed in 22.096931394s
• [SLOW TEST:28.764 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:43:19.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9854.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9854.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9854.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9854.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9854.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9854.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 13:43:26.052: INFO: DNS probes using dns-9854/dns-test-61e4d5c8-d192-4a95-af15-895dd248f360 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:43:26.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9854" for this suite.
Apr 15 13:43:32.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:43:32.267: INFO: namespace dns-9854 deletion completed in 6.138860275s
• [SLOW TEST:12.353 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:43:32.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 13:43:52.331: INFO: Container started at 2020-04-15 13:43:34 +0000 UTC, pod became ready at 2020-04-15 13:43:50 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:43:52.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5992" for this suite.
Apr 15 13:44:14.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:44:14.421: INFO: namespace container-probe-5992 deletion completed in 22.085394835s
• [SLOW TEST:42.153 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:44:14.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:44:14.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5365" for this suite.
Apr 15 13:44:20.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:44:20.648: INFO: namespace services-5365 deletion completed in 6.108716101s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.227 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:44:20.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 15 13:44:30.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 13:44:30.744: INFO: >>> kubeConfig: /root/.kube/config
I0415
13:44:30.784991 6 log.go:172] (0xc00238a840) (0xc002ee19a0) Create stream I0415 13:44:30.785016 6 log.go:172] (0xc00238a840) (0xc002ee19a0) Stream added, broadcasting: 1 I0415 13:44:30.786906 6 log.go:172] (0xc00238a840) Reply frame received for 1 I0415 13:44:30.786959 6 log.go:172] (0xc00238a840) (0xc0009f03c0) Create stream I0415 13:44:30.786975 6 log.go:172] (0xc00238a840) (0xc0009f03c0) Stream added, broadcasting: 3 I0415 13:44:30.788001 6 log.go:172] (0xc00238a840) Reply frame received for 3 I0415 13:44:30.788053 6 log.go:172] (0xc00238a840) (0xc0009f05a0) Create stream I0415 13:44:30.788069 6 log.go:172] (0xc00238a840) (0xc0009f05a0) Stream added, broadcasting: 5 I0415 13:44:30.789039 6 log.go:172] (0xc00238a840) Reply frame received for 5 I0415 13:44:30.865311 6 log.go:172] (0xc00238a840) Data frame received for 5 I0415 13:44:30.865348 6 log.go:172] (0xc0009f05a0) (5) Data frame handling I0415 13:44:30.865377 6 log.go:172] (0xc00238a840) Data frame received for 3 I0415 13:44:30.865394 6 log.go:172] (0xc0009f03c0) (3) Data frame handling I0415 13:44:30.865415 6 log.go:172] (0xc0009f03c0) (3) Data frame sent I0415 13:44:30.865429 6 log.go:172] (0xc00238a840) Data frame received for 3 I0415 13:44:30.865442 6 log.go:172] (0xc0009f03c0) (3) Data frame handling I0415 13:44:30.867369 6 log.go:172] (0xc00238a840) Data frame received for 1 I0415 13:44:30.867416 6 log.go:172] (0xc002ee19a0) (1) Data frame handling I0415 13:44:30.867441 6 log.go:172] (0xc002ee19a0) (1) Data frame sent I0415 13:44:30.867457 6 log.go:172] (0xc00238a840) (0xc002ee19a0) Stream removed, broadcasting: 1 I0415 13:44:30.867494 6 log.go:172] (0xc00238a840) Go away received I0415 13:44:30.867669 6 log.go:172] (0xc00238a840) (0xc002ee19a0) Stream removed, broadcasting: 1 I0415 13:44:30.867706 6 log.go:172] (0xc00238a840) (0xc0009f03c0) Stream removed, broadcasting: 3 I0415 13:44:30.867734 6 log.go:172] (0xc00238a840) (0xc0009f05a0) Stream removed, broadcasting: 5 Apr 15 13:44:30.867: INFO: Exec 
stderr: "" Apr 15 13:44:30.867: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:30.867: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:30.974070 6 log.go:172] (0xc0020b4840) (0xc003184820) Create stream I0415 13:44:30.974100 6 log.go:172] (0xc0020b4840) (0xc003184820) Stream added, broadcasting: 1 I0415 13:44:30.975546 6 log.go:172] (0xc0020b4840) Reply frame received for 1 I0415 13:44:30.975581 6 log.go:172] (0xc0020b4840) (0xc002ee1ae0) Create stream I0415 13:44:30.975593 6 log.go:172] (0xc0020b4840) (0xc002ee1ae0) Stream added, broadcasting: 3 I0415 13:44:30.976242 6 log.go:172] (0xc0020b4840) Reply frame received for 3 I0415 13:44:30.976267 6 log.go:172] (0xc0020b4840) (0xc0009f06e0) Create stream I0415 13:44:30.976276 6 log.go:172] (0xc0020b4840) (0xc0009f06e0) Stream added, broadcasting: 5 I0415 13:44:30.976959 6 log.go:172] (0xc0020b4840) Reply frame received for 5 I0415 13:44:31.045787 6 log.go:172] (0xc0020b4840) Data frame received for 5 I0415 13:44:31.045838 6 log.go:172] (0xc0009f06e0) (5) Data frame handling I0415 13:44:31.045875 6 log.go:172] (0xc0020b4840) Data frame received for 3 I0415 13:44:31.045915 6 log.go:172] (0xc002ee1ae0) (3) Data frame handling I0415 13:44:31.045950 6 log.go:172] (0xc002ee1ae0) (3) Data frame sent I0415 13:44:31.045967 6 log.go:172] (0xc0020b4840) Data frame received for 3 I0415 13:44:31.045979 6 log.go:172] (0xc002ee1ae0) (3) Data frame handling I0415 13:44:31.047733 6 log.go:172] (0xc0020b4840) Data frame received for 1 I0415 13:44:31.047812 6 log.go:172] (0xc003184820) (1) Data frame handling I0415 13:44:31.047882 6 log.go:172] (0xc003184820) (1) Data frame sent I0415 13:44:31.047955 6 log.go:172] (0xc0020b4840) (0xc003184820) Stream removed, broadcasting: 1 I0415 13:44:31.047996 6 log.go:172] (0xc0020b4840) Go away received I0415 13:44:31.048425 
6 log.go:172] (0xc0020b4840) (0xc003184820) Stream removed, broadcasting: 1 I0415 13:44:31.048457 6 log.go:172] (0xc0020b4840) (0xc002ee1ae0) Stream removed, broadcasting: 3 I0415 13:44:31.048478 6 log.go:172] (0xc0020b4840) (0xc0009f06e0) Stream removed, broadcasting: 5 Apr 15 13:44:31.048: INFO: Exec stderr: "" Apr 15 13:44:31.048: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:31.048: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:31.079527 6 log.go:172] (0xc002f4b290) (0xc0009f0d20) Create stream I0415 13:44:31.079561 6 log.go:172] (0xc002f4b290) (0xc0009f0d20) Stream added, broadcasting: 1 I0415 13:44:31.088283 6 log.go:172] (0xc002f4b290) Reply frame received for 1 I0415 13:44:31.088336 6 log.go:172] (0xc002f4b290) (0xc001910320) Create stream I0415 13:44:31.088351 6 log.go:172] (0xc002f4b290) (0xc001910320) Stream added, broadcasting: 3 I0415 13:44:31.089812 6 log.go:172] (0xc002f4b290) Reply frame received for 3 I0415 13:44:31.089849 6 log.go:172] (0xc002f4b290) (0xc002ee1c20) Create stream I0415 13:44:31.089862 6 log.go:172] (0xc002f4b290) (0xc002ee1c20) Stream added, broadcasting: 5 I0415 13:44:31.092692 6 log.go:172] (0xc002f4b290) Reply frame received for 5 I0415 13:44:31.157282 6 log.go:172] (0xc002f4b290) Data frame received for 3 I0415 13:44:31.157309 6 log.go:172] (0xc001910320) (3) Data frame handling I0415 13:44:31.157324 6 log.go:172] (0xc001910320) (3) Data frame sent I0415 13:44:31.157330 6 log.go:172] (0xc002f4b290) Data frame received for 3 I0415 13:44:31.157336 6 log.go:172] (0xc001910320) (3) Data frame handling I0415 13:44:31.157486 6 log.go:172] (0xc002f4b290) Data frame received for 5 I0415 13:44:31.157526 6 log.go:172] (0xc002ee1c20) (5) Data frame handling I0415 13:44:31.159110 6 log.go:172] (0xc002f4b290) Data frame received for 1 I0415 13:44:31.159183 6 log.go:172] 
(0xc0009f0d20) (1) Data frame handling I0415 13:44:31.159227 6 log.go:172] (0xc0009f0d20) (1) Data frame sent I0415 13:44:31.159312 6 log.go:172] (0xc002f4b290) (0xc0009f0d20) Stream removed, broadcasting: 1 I0415 13:44:31.159360 6 log.go:172] (0xc002f4b290) Go away received I0415 13:44:31.159476 6 log.go:172] (0xc002f4b290) (0xc0009f0d20) Stream removed, broadcasting: 1 I0415 13:44:31.159508 6 log.go:172] (0xc002f4b290) (0xc001910320) Stream removed, broadcasting: 3 I0415 13:44:31.159533 6 log.go:172] (0xc002f4b290) (0xc002ee1c20) Stream removed, broadcasting: 5 Apr 15 13:44:31.159: INFO: Exec stderr: "" Apr 15 13:44:31.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:31.159: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:31.193684 6 log.go:172] (0xc0022ca2c0) (0xc001048820) Create stream I0415 13:44:31.193710 6 log.go:172] (0xc0022ca2c0) (0xc001048820) Stream added, broadcasting: 1 I0415 13:44:31.195961 6 log.go:172] (0xc0022ca2c0) Reply frame received for 1 I0415 13:44:31.196010 6 log.go:172] (0xc0022ca2c0) (0xc0019103c0) Create stream I0415 13:44:31.196028 6 log.go:172] (0xc0022ca2c0) (0xc0019103c0) Stream added, broadcasting: 3 I0415 13:44:31.197084 6 log.go:172] (0xc0022ca2c0) Reply frame received for 3 I0415 13:44:31.197235 6 log.go:172] (0xc0022ca2c0) (0xc001048960) Create stream I0415 13:44:31.197249 6 log.go:172] (0xc0022ca2c0) (0xc001048960) Stream added, broadcasting: 5 I0415 13:44:31.198469 6 log.go:172] (0xc0022ca2c0) Reply frame received for 5 I0415 13:44:31.253311 6 log.go:172] (0xc0022ca2c0) Data frame received for 3 I0415 13:44:31.253357 6 log.go:172] (0xc0019103c0) (3) Data frame handling I0415 13:44:31.253377 6 log.go:172] (0xc0019103c0) (3) Data frame sent I0415 13:44:31.253393 6 log.go:172] (0xc0022ca2c0) Data frame received for 3 I0415 13:44:31.253402 6 log.go:172] 
(0xc0019103c0) (3) Data frame handling I0415 13:44:31.253444 6 log.go:172] (0xc0022ca2c0) Data frame received for 5 I0415 13:44:31.253487 6 log.go:172] (0xc001048960) (5) Data frame handling I0415 13:44:31.254655 6 log.go:172] (0xc0022ca2c0) Data frame received for 1 I0415 13:44:31.254689 6 log.go:172] (0xc001048820) (1) Data frame handling I0415 13:44:31.254706 6 log.go:172] (0xc001048820) (1) Data frame sent I0415 13:44:31.254742 6 log.go:172] (0xc0022ca2c0) (0xc001048820) Stream removed, broadcasting: 1 I0415 13:44:31.254763 6 log.go:172] (0xc0022ca2c0) Go away received I0415 13:44:31.254864 6 log.go:172] (0xc0022ca2c0) (0xc001048820) Stream removed, broadcasting: 1 I0415 13:44:31.254885 6 log.go:172] (0xc0022ca2c0) (0xc0019103c0) Stream removed, broadcasting: 3 I0415 13:44:31.254903 6 log.go:172] (0xc0022ca2c0) (0xc001048960) Stream removed, broadcasting: 5 Apr 15 13:44:31.254: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 15 13:44:31.254: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:31.255: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:31.287281 6 log.go:172] (0xc00134d340) (0xc001910820) Create stream I0415 13:44:31.287320 6 log.go:172] (0xc00134d340) (0xc001910820) Stream added, broadcasting: 1 I0415 13:44:31.289502 6 log.go:172] (0xc00134d340) Reply frame received for 1 I0415 13:44:31.289539 6 log.go:172] (0xc00134d340) (0xc001048be0) Create stream I0415 13:44:31.289552 6 log.go:172] (0xc00134d340) (0xc001048be0) Stream added, broadcasting: 3 I0415 13:44:31.290484 6 log.go:172] (0xc00134d340) Reply frame received for 3 I0415 13:44:31.290537 6 log.go:172] (0xc00134d340) (0xc001048f00) Create stream I0415 13:44:31.290556 6 log.go:172] (0xc00134d340) (0xc001048f00) Stream added, broadcasting: 5 I0415 
13:44:31.291428 6 log.go:172] (0xc00134d340) Reply frame received for 5 I0415 13:44:31.359568 6 log.go:172] (0xc00134d340) Data frame received for 5 I0415 13:44:31.359608 6 log.go:172] (0xc001048f00) (5) Data frame handling I0415 13:44:31.359639 6 log.go:172] (0xc00134d340) Data frame received for 3 I0415 13:44:31.359672 6 log.go:172] (0xc001048be0) (3) Data frame handling I0415 13:44:31.359701 6 log.go:172] (0xc001048be0) (3) Data frame sent I0415 13:44:31.359722 6 log.go:172] (0xc00134d340) Data frame received for 3 I0415 13:44:31.359734 6 log.go:172] (0xc001048be0) (3) Data frame handling I0415 13:44:31.361406 6 log.go:172] (0xc00134d340) Data frame received for 1 I0415 13:44:31.361477 6 log.go:172] (0xc001910820) (1) Data frame handling I0415 13:44:31.361506 6 log.go:172] (0xc001910820) (1) Data frame sent I0415 13:44:31.361526 6 log.go:172] (0xc00134d340) (0xc001910820) Stream removed, broadcasting: 1 I0415 13:44:31.361549 6 log.go:172] (0xc00134d340) Go away received I0415 13:44:31.361808 6 log.go:172] (0xc00134d340) (0xc001910820) Stream removed, broadcasting: 1 I0415 13:44:31.361830 6 log.go:172] (0xc00134d340) (0xc001048be0) Stream removed, broadcasting: 3 I0415 13:44:31.361844 6 log.go:172] (0xc00134d340) (0xc001048f00) Stream removed, broadcasting: 5 Apr 15 13:44:31.361: INFO: Exec stderr: "" Apr 15 13:44:31.361: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:31.361: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:31.391842 6 log.go:172] (0xc0022cb600) (0xc001049860) Create stream I0415 13:44:31.391883 6 log.go:172] (0xc0022cb600) (0xc001049860) Stream added, broadcasting: 1 I0415 13:44:31.394309 6 log.go:172] (0xc0022cb600) Reply frame received for 1 I0415 13:44:31.394333 6 log.go:172] (0xc0022cb600) (0xc001910960) Create stream I0415 13:44:31.394341 6 log.go:172] (0xc0022cb600) 
(0xc001910960) Stream added, broadcasting: 3 I0415 13:44:31.395027 6 log.go:172] (0xc0022cb600) Reply frame received for 3 I0415 13:44:31.395053 6 log.go:172] (0xc0022cb600) (0xc002ee1d60) Create stream I0415 13:44:31.395062 6 log.go:172] (0xc0022cb600) (0xc002ee1d60) Stream added, broadcasting: 5 I0415 13:44:31.395970 6 log.go:172] (0xc0022cb600) Reply frame received for 5 I0415 13:44:31.468037 6 log.go:172] (0xc0022cb600) Data frame received for 3 I0415 13:44:31.468080 6 log.go:172] (0xc001910960) (3) Data frame handling I0415 13:44:31.468094 6 log.go:172] (0xc001910960) (3) Data frame sent I0415 13:44:31.468104 6 log.go:172] (0xc0022cb600) Data frame received for 3 I0415 13:44:31.468108 6 log.go:172] (0xc001910960) (3) Data frame handling I0415 13:44:31.468127 6 log.go:172] (0xc0022cb600) Data frame received for 5 I0415 13:44:31.468142 6 log.go:172] (0xc002ee1d60) (5) Data frame handling I0415 13:44:31.469940 6 log.go:172] (0xc0022cb600) Data frame received for 1 I0415 13:44:31.469962 6 log.go:172] (0xc001049860) (1) Data frame handling I0415 13:44:31.469973 6 log.go:172] (0xc001049860) (1) Data frame sent I0415 13:44:31.470081 6 log.go:172] (0xc0022cb600) (0xc001049860) Stream removed, broadcasting: 1 I0415 13:44:31.470232 6 log.go:172] (0xc0022cb600) (0xc001049860) Stream removed, broadcasting: 1 I0415 13:44:31.470264 6 log.go:172] (0xc0022cb600) (0xc001910960) Stream removed, broadcasting: 3 I0415 13:44:31.470516 6 log.go:172] (0xc0022cb600) (0xc002ee1d60) Stream removed, broadcasting: 5 Apr 15 13:44:31.470: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 15 13:44:31.470: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:31.470: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:31.473338 6 log.go:172] 
(0xc0022cb600) Go away received I0415 13:44:31.499678 6 log.go:172] (0xc002ca44d0) (0xc001049e00) Create stream I0415 13:44:31.499700 6 log.go:172] (0xc002ca44d0) (0xc001049e00) Stream added, broadcasting: 1 I0415 13:44:31.502366 6 log.go:172] (0xc002ca44d0) Reply frame received for 1 I0415 13:44:31.502401 6 log.go:172] (0xc002ca44d0) (0xc001910a00) Create stream I0415 13:44:31.502414 6 log.go:172] (0xc002ca44d0) (0xc001910a00) Stream added, broadcasting: 3 I0415 13:44:31.503213 6 log.go:172] (0xc002ca44d0) Reply frame received for 3 I0415 13:44:31.503249 6 log.go:172] (0xc002ca44d0) (0xc0031848c0) Create stream I0415 13:44:31.503258 6 log.go:172] (0xc002ca44d0) (0xc0031848c0) Stream added, broadcasting: 5 I0415 13:44:31.504117 6 log.go:172] (0xc002ca44d0) Reply frame received for 5 I0415 13:44:31.561272 6 log.go:172] (0xc002ca44d0) Data frame received for 5 I0415 13:44:31.561332 6 log.go:172] (0xc0031848c0) (5) Data frame handling I0415 13:44:31.561362 6 log.go:172] (0xc002ca44d0) Data frame received for 3 I0415 13:44:31.561379 6 log.go:172] (0xc001910a00) (3) Data frame handling I0415 13:44:31.561390 6 log.go:172] (0xc001910a00) (3) Data frame sent I0415 13:44:31.561402 6 log.go:172] (0xc002ca44d0) Data frame received for 3 I0415 13:44:31.561414 6 log.go:172] (0xc001910a00) (3) Data frame handling I0415 13:44:31.563058 6 log.go:172] (0xc002ca44d0) Data frame received for 1 I0415 13:44:31.563085 6 log.go:172] (0xc001049e00) (1) Data frame handling I0415 13:44:31.563100 6 log.go:172] (0xc001049e00) (1) Data frame sent I0415 13:44:31.563116 6 log.go:172] (0xc002ca44d0) (0xc001049e00) Stream removed, broadcasting: 1 I0415 13:44:31.563211 6 log.go:172] (0xc002ca44d0) (0xc001049e00) Stream removed, broadcasting: 1 I0415 13:44:31.563231 6 log.go:172] (0xc002ca44d0) (0xc001910a00) Stream removed, broadcasting: 3 I0415 13:44:31.563361 6 log.go:172] (0xc002ca44d0) Go away received I0415 13:44:31.563441 6 log.go:172] (0xc002ca44d0) (0xc0031848c0) Stream removed, 
broadcasting: 5 Apr 15 13:44:31.563: INFO: Exec stderr: "" Apr 15 13:44:31.563: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 13:44:31.563: INFO: >>> kubeConfig: /root/.kube/config I0415 13:44:31.599363 6 log.go:172] (0xc002ca5550) (0xc001568640) Create stream I0415 13:44:31.599384 6 log.go:172] (0xc002ca5550) (0xc001568640) Stream added, broadcasting: 1 I0415 13:44:31.602038 6 log.go:172] (0xc002ca5550) Reply frame received for 1 I0415 13:44:31.602100 6 log.go:172] (0xc002ca5550) (0xc0009f0dc0) Create stream I0415 13:44:31.602118 6 log.go:172] (0xc002ca5550) (0xc0009f0dc0) Stream added, broadcasting: 3 I0415 13:44:31.603232 6 log.go:172] (0xc002ca5550) Reply frame received for 3 I0415 13:44:31.603272 6 log.go:172] (0xc002ca5550) (0xc0009f0f00) Create stream I0415 13:44:31.603288 6 log.go:172] (0xc002ca5550) (0xc0009f0f00) Stream added, broadcasting: 5 I0415 13:44:31.604419 6 log.go:172] (0xc002ca5550) Reply frame received for 5 I0415 13:44:31.673987 6 log.go:172] (0xc002ca5550) Data frame received for 5 I0415 13:44:31.674026 6 log.go:172] (0xc0009f0f00) (5) Data frame handling I0415 13:44:31.674079 6 log.go:172] (0xc002ca5550) Data frame received for 3 I0415 13:44:31.674120 6 log.go:172] (0xc0009f0dc0) (3) Data frame handling I0415 13:44:31.674147 6 log.go:172] (0xc0009f0dc0) (3) Data frame sent I0415 13:44:31.674188 6 log.go:172] (0xc002ca5550) Data frame received for 3 I0415 13:44:31.674203 6 log.go:172] (0xc0009f0dc0) (3) Data frame handling I0415 13:44:31.676010 6 log.go:172] (0xc002ca5550) Data frame received for 1 I0415 13:44:31.676038 6 log.go:172] (0xc001568640) (1) Data frame handling I0415 13:44:31.676050 6 log.go:172] (0xc001568640) (1) Data frame sent I0415 13:44:31.676065 6 log.go:172] (0xc002ca5550) (0xc001568640) Stream removed, broadcasting: 1 I0415 13:44:31.676149 6 
log.go:172] (0xc002ca5550) Go away received
I0415 13:44:31.676192 6 log.go:172] (0xc002ca5550) (0xc001568640) Stream removed, broadcasting: 1
I0415 13:44:31.676233 6 log.go:172] (0xc002ca5550) (0xc0009f0dc0) Stream removed, broadcasting: 3
I0415 13:44:31.676266 6 log.go:172] (0xc002ca5550) (0xc0009f0f00) Stream removed, broadcasting: 5
Apr 15 13:44:31.676: INFO: Exec stderr: ""
Apr 15 13:44:31.676: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 13:44:31.676: INFO: >>> kubeConfig: /root/.kube/config
I0415 13:44:31.712470 6 log.go:172] (0xc0027c4790) (0xc0009f1400) Create stream
I0415 13:44:31.712502 6 log.go:172] (0xc0027c4790) (0xc0009f1400) Stream added, broadcasting: 1
I0415 13:44:31.715624 6 log.go:172] (0xc0027c4790) Reply frame received for 1
I0415 13:44:31.715669 6 log.go:172] (0xc0027c4790) (0xc0015686e0) Create stream
I0415 13:44:31.715683 6 log.go:172] (0xc0027c4790) (0xc0015686e0) Stream added, broadcasting: 3
I0415 13:44:31.716497 6 log.go:172] (0xc0027c4790) Reply frame received for 3
I0415 13:44:31.716533 6 log.go:172] (0xc0027c4790) (0xc003184960) Create stream
I0415 13:44:31.716547 6 log.go:172] (0xc0027c4790) (0xc003184960) Stream added, broadcasting: 5
I0415 13:44:31.717520 6 log.go:172] (0xc0027c4790) Reply frame received for 5
I0415 13:44:31.767783 6 log.go:172] (0xc0027c4790) Data frame received for 5
I0415 13:44:31.767804 6 log.go:172] (0xc003184960) (5) Data frame handling
I0415 13:44:31.767826 6 log.go:172] (0xc0027c4790) Data frame received for 3
I0415 13:44:31.767832 6 log.go:172] (0xc0015686e0) (3) Data frame handling
I0415 13:44:31.767838 6 log.go:172] (0xc0015686e0) (3) Data frame sent
I0415 13:44:31.767843 6 log.go:172] (0xc0027c4790) Data frame received for 3
I0415 13:44:31.767861 6 log.go:172] (0xc0015686e0) (3) Data frame handling
I0415 13:44:31.769748 6 log.go:172] (0xc0027c4790) Data frame received for 1
I0415 13:44:31.769790 6 log.go:172] (0xc0009f1400) (1) Data frame handling
I0415 13:44:31.769824 6 log.go:172] (0xc0009f1400) (1) Data frame sent
I0415 13:44:31.769848 6 log.go:172] (0xc0027c4790) (0xc0009f1400) Stream removed, broadcasting: 1
I0415 13:44:31.769872 6 log.go:172] (0xc0027c4790) Go away received
I0415 13:44:31.770027 6 log.go:172] (0xc0027c4790) (0xc0009f1400) Stream removed, broadcasting: 1
I0415 13:44:31.770053 6 log.go:172] (0xc0027c4790) (0xc0015686e0) Stream removed, broadcasting: 3
I0415 13:44:31.770065 6 log.go:172] (0xc0027c4790) (0xc003184960) Stream removed, broadcasting: 5
Apr 15 13:44:31.770: INFO: Exec stderr: ""
Apr 15 13:44:31.770: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1880 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 13:44:31.770: INFO: >>> kubeConfig: /root/.kube/config
I0415 13:44:31.807260 6 log.go:172] (0xc0027c6580) (0xc003012140) Create stream
I0415 13:44:31.807302 6 log.go:172] (0xc0027c6580) (0xc003012140) Stream added, broadcasting: 1
I0415 13:44:31.810664 6 log.go:172] (0xc0027c6580) Reply frame received for 1
I0415 13:44:31.810717 6 log.go:172] (0xc0027c6580) (0xc0009f14a0) Create stream
I0415 13:44:31.810733 6 log.go:172] (0xc0027c6580) (0xc0009f14a0) Stream added, broadcasting: 3
I0415 13:44:31.811964 6 log.go:172] (0xc0027c6580) Reply frame received for 3
I0415 13:44:31.812009 6 log.go:172] (0xc0027c6580) (0xc001568780) Create stream
I0415 13:44:31.812025 6 log.go:172] (0xc0027c6580) (0xc001568780) Stream added, broadcasting: 5
I0415 13:44:31.812885 6 log.go:172] (0xc0027c6580) Reply frame received for 5
I0415 13:44:31.887713 6 log.go:172] (0xc0027c6580) Data frame received for 3
I0415 13:44:31.887734 6 log.go:172] (0xc0009f14a0) (3) Data frame handling
I0415 13:44:31.887741 6 log.go:172] (0xc0009f14a0) (3) Data frame sent
I0415 13:44:31.887745 6 log.go:172] (0xc0027c6580) Data frame received for 3
I0415 13:44:31.887749 6 log.go:172] (0xc0009f14a0) (3) Data frame handling
I0415 13:44:31.887763 6 log.go:172] (0xc0027c6580) Data frame received for 5
I0415 13:44:31.887769 6 log.go:172] (0xc001568780) (5) Data frame handling
I0415 13:44:31.889335 6 log.go:172] (0xc0027c6580) Data frame received for 1
I0415 13:44:31.889359 6 log.go:172] (0xc003012140) (1) Data frame handling
I0415 13:44:31.889382 6 log.go:172] (0xc003012140) (1) Data frame sent
I0415 13:44:31.889396 6 log.go:172] (0xc0027c6580) (0xc003012140) Stream removed, broadcasting: 1
I0415 13:44:31.889411 6 log.go:172] (0xc0027c6580) Go away received
I0415 13:44:31.889537 6 log.go:172] (0xc0027c6580) (0xc003012140) Stream removed, broadcasting: 1
I0415 13:44:31.889560 6 log.go:172] (0xc0027c6580) (0xc0009f14a0) Stream removed, broadcasting: 3
I0415 13:44:31.889567 6 log.go:172] (0xc0027c6580) (0xc001568780) Stream removed, broadcasting: 5
Apr 15 13:44:31.889: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:44:31.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1880" for this suite.
Apr 15 13:45:15.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:45:15.991: INFO: namespace e2e-kubelet-etc-hosts-1880 deletion completed in 44.097954341s
• [SLOW TEST:55.343 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:45:15.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-g485f in namespace proxy-9182
I0415 13:45:16.083977 6 runners.go:180] Created replication controller with name: proxy-service-g485f, namespace: proxy-9182, replica count: 1
I0415 13:45:17.134527 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0415 13:45:18.134759 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0415 13:45:19.135004 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0415 13:45:20.135233 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:21.135448 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:22.135691 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:23.135931 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:24.136183 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:25.136410 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:26.136631 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0415 13:45:27.136833 6 runners.go:180] proxy-service-g485f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 15 13:45:27.141: INFO: setup took 11.09743628s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 15 13:45:27.153: INFO: (0) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 12.454671ms)
Apr 15 13:45:27.153: INFO: (0) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 12.47243ms)
Apr 15 13:45:27.153: INFO: (0) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 12.667905ms)
Apr 15 13:45:27.154: INFO: (0) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 12.636791ms)
Apr 15 13:45:27.154: INFO: (0) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 12.500257ms)
Apr 15 13:45:27.154: INFO: (0) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 12.718862ms)
Apr 15 13:45:27.154: INFO: (0) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 13.209797ms)
Apr 15 13:45:27.155: INFO: (0) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 13.762358ms)
Apr 15 13:45:27.156: INFO: (0) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 14.852539ms)
Apr 15 13:45:27.156: INFO: (0) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 15.266352ms)
Apr 15 13:45:27.156: INFO: (0) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 15.653464ms)
Apr 15 13:45:27.157: INFO: (0) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 15.95052ms)
Apr 15 13:45:27.159: INFO: (0) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: ... (200; 3.638646ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 3.72275ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 3.698318ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.779738ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.691356ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 3.716126ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.716152ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 4.448683ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 4.532907ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 4.585288ms)
Apr 15 13:45:27.164: INFO: (1) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 4.520783ms)
Apr 15 13:45:27.165: INFO: (1) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.817549ms)
Apr 15 13:45:27.165: INFO: (1) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 4.958141ms)
Apr 15 13:45:27.168: INFO: (2) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.109798ms)
Apr 15 13:45:27.168: INFO: (2) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.113904ms)
Apr 15 13:45:27.168: INFO: (2) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.158091ms)
Apr 15 13:45:27.168: INFO: (2) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.154499ms)
Apr 15 13:45:27.168: INFO: (2) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test (200; 3.892616ms)
Apr 15 13:45:27.169: INFO: (2) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 3.945297ms)
Apr 15 13:45:27.169: INFO: (2) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 3.904015ms)
Apr 15 13:45:27.169: INFO: (2) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 3.925758ms)
Apr 15 13:45:27.169: INFO: (2) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.928372ms)
Apr 15 13:45:27.169: INFO: (2) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 4.00421ms)
Apr 15 13:45:27.170: INFO: (2) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 4.720474ms)
Apr 15 13:45:27.170: INFO: (2) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 5.431072ms)
Apr 15 13:45:27.170: INFO: (2) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 5.361617ms)
Apr 15 13:45:27.170: INFO: (2) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 5.382503ms)
Apr 15 13:45:27.170: INFO: (2) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 5.37223ms)
Apr 15 13:45:27.174: INFO: (3) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.172705ms)
Apr 15 13:45:27.174: INFO: (3) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 3.115ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 4.288605ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 4.365561ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 4.309872ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.384867ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.445632ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.47424ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 4.508262ms)
Apr 15 13:45:27.175: INFO: (3) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test<... (200; 2.533168ms)
Apr 15 13:45:27.179: INFO: (4) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 2.798936ms)
Apr 15 13:45:27.180: INFO: (4) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 3.884845ms)
Apr 15 13:45:27.180: INFO: (4) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 2.966684ms)
Apr 15 13:45:27.180: INFO: (4) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.081128ms)
Apr 15 13:45:27.180: INFO: (4) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 3.760974ms)
Apr 15 13:45:27.180: INFO: (4) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.233736ms)
Apr 15 13:45:27.180: INFO: (4) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 3.522043ms)
Apr 15 13:45:27.181: INFO: (4) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 3.870253ms)
Apr 15 13:45:27.181: INFO: (4) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 3.81524ms)
Apr 15 13:45:27.181: INFO: (4) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.789333ms)
Apr 15 13:45:27.181: INFO: (4) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 3.706607ms)
Apr 15 13:45:27.181: INFO: (4) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 3.713545ms)
Apr 15 13:45:27.182: INFO: (4) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 3.973904ms)
Apr 15 13:45:27.189: INFO: (5) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 6.955208ms)
Apr 15 13:45:27.189: INFO: (5) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 7.461254ms)
Apr 15 13:45:27.190: INFO: (5) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 8.494006ms)
Apr 15 13:45:27.191: INFO: (5) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 9.124055ms)
Apr 15 13:45:27.191: INFO: (5) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 9.21496ms)
Apr 15 13:45:27.191: INFO: (5) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: ... (200; 10.127922ms)
Apr 15 13:45:27.193: INFO: (5) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 10.996767ms)
Apr 15 13:45:27.193: INFO: (5) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 11.07505ms)
Apr 15 13:45:27.193: INFO: (5) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 11.159685ms)
Apr 15 13:45:27.193: INFO: (5) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 11.299322ms)
Apr 15 13:45:27.193: INFO: (5) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 11.251528ms)
Apr 15 13:45:27.201: INFO: (6) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 7.507855ms)
Apr 15 13:45:27.201: INFO: (6) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 7.457571ms)
Apr 15 13:45:27.201: INFO: (6) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test (200; 9.707179ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 9.702919ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 9.690721ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 9.746255ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 9.70202ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 9.847127ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 9.745419ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 9.860966ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 9.989933ms)
Apr 15 13:45:27.203: INFO: (6) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 9.991832ms)
Apr 15 13:45:27.204: INFO: (6) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 10.772875ms)
Apr 15 13:45:27.207: INFO: (7) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 3.221288ms)
Apr 15 13:45:27.208: INFO: (7) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.920559ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test<... (200; 4.731898ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.730776ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 4.730297ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 4.737523ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.650769ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.679531ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 4.697339ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 5.094958ms)
Apr 15 13:45:27.209: INFO: (7) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 5.11623ms)
Apr 15 13:45:27.210: INFO: (7) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 5.285845ms)
Apr 15 13:45:27.210: INFO: (7) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 5.41913ms)
Apr 15 13:45:27.210: INFO: (7) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 5.818831ms)
Apr 15 13:45:27.210: INFO: (7) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 5.805778ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.5908ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 4.528717ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.556369ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.580316ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 4.660786ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.648688ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 4.670442ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 4.64181ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 4.755338ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.74846ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 5.080789ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 5.388505ms)
Apr 15 13:45:27.215: INFO: (8) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 5.362823ms)
Apr 15 13:45:27.216: INFO: (8) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 5.494179ms)
Apr 15 13:45:27.216: INFO: (8) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 5.604825ms)
Apr 15 13:45:27.216: INFO: (8) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test<... (200; 3.109524ms)
Apr 15 13:45:27.219: INFO: (9) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 3.363039ms)
Apr 15 13:45:27.219: INFO: (9) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 3.483644ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 3.95579ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.866514ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.954314ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.921484ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.045094ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 3.955033ms)
Apr 15 13:45:27.220: INFO: (9) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test<... (200; 1.720195ms)
Apr 15 13:45:27.223: INFO: (10) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 1.780942ms)
Apr 15 13:45:27.225: INFO: (10) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.721864ms)
Apr 15 13:45:27.225: INFO: (10) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 3.68526ms)
Apr 15 13:45:27.225: INFO: (10) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.927587ms)
Apr 15 13:45:27.225: INFO: (10) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 4.016716ms)
Apr 15 13:45:27.225: INFO: (10) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.013072ms)
Apr 15 13:45:27.225: INFO: (10) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test<... (200; 4.293897ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.413197ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 5.28128ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.904168ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 5.157447ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test (200; 5.38862ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 5.181624ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 5.343172ms)
Apr 15 13:45:27.235: INFO: (11) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 5.727047ms)
Apr 15 13:45:27.240: INFO: (12) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.730254ms)
Apr 15 13:45:27.240: INFO: (12) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: ... (200; 5.703814ms)
Apr 15 13:45:27.241: INFO: (12) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 5.670766ms)
Apr 15 13:45:27.241: INFO: (12) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 5.703711ms)
Apr 15 13:45:27.241: INFO: (12) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 5.748489ms)
Apr 15 13:45:27.242: INFO: (12) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 6.335779ms)
Apr 15 13:45:27.242: INFO: (12) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 6.516081ms)
Apr 15 13:45:27.242: INFO: (12) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 6.560217ms)
Apr 15 13:45:27.242: INFO: (12) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 6.570745ms)
Apr 15 13:45:27.242: INFO: (12) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 6.775004ms)
Apr 15 13:45:27.242: INFO: (12) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 6.750174ms)
Apr 15 13:45:27.245: INFO: (13) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 2.90629ms)
Apr 15 13:45:27.245: INFO: (13) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 3.166347ms)
Apr 15 13:45:27.245: INFO: (13) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.056589ms)
Apr 15 13:45:27.245: INFO: (13) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 3.3087ms)
Apr 15 13:45:27.245: INFO: (13) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.298694ms)
Apr 15 13:45:27.245: INFO: (13) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.404804ms)
Apr 15 13:45:27.246: INFO: (13) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test (200; 3.628893ms)
Apr 15 13:45:27.246: INFO: (13) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 3.668009ms)
Apr 15 13:45:27.246: INFO: (13) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.936387ms)
Apr 15 13:45:27.247: INFO: (13) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 4.568796ms)
Apr 15 13:45:27.247: INFO: (13) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 4.598987ms)
Apr 15 13:45:27.247: INFO: (13) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 4.553173ms)
Apr 15 13:45:27.247: INFO: (13) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.677166ms)
Apr 15 13:45:27.247: INFO: (13) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 4.627797ms)
Apr 15 13:45:27.247: INFO: (13) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 4.788072ms)
Apr 15 13:45:27.249: INFO: (14) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 2.439284ms)
Apr 15 13:45:27.249: INFO: (14) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 2.427799ms)
Apr 15 13:45:27.251: INFO: (14) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 3.993486ms)
Apr 15 13:45:27.251: INFO: (14) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.132666ms)
Apr 15 13:45:27.251: INFO: (14) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 4.286433ms)
Apr 15 13:45:27.251: INFO: (14) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.262773ms)
Apr 15 13:45:27.251: INFO: (14) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.464055ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.784657ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 4.791071ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: ... (200; 4.941343ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 5.002703ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 5.10392ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 5.040617ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 5.078294ms)
Apr 15 13:45:27.252: INFO: (14) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 5.231265ms)
Apr 15 13:45:27.256: INFO: (15) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 3.674993ms)
Apr 15 13:45:27.256: INFO: (15) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.683188ms)
Apr 15 13:45:27.256: INFO: (15) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.796231ms)
Apr 15 13:45:27.256: INFO: (15) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 3.951256ms)
Apr 15 13:45:27.256: INFO: (15) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 4.115792ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.213294ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 4.287366ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.400039ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.745125ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.763352ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 4.788444ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 5.061746ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 5.124474ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 5.209845ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 5.133733ms)
Apr 15 13:45:27.257: INFO: (15) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: ... (200; 3.498157ms)
Apr 15 13:45:27.261: INFO: (16) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.606312ms)
Apr 15 13:45:27.261: INFO: (16) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 3.564699ms)
Apr 15 13:45:27.261: INFO: (16) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.646303ms)
Apr 15 13:45:27.261: INFO: (16) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test (200; 3.754834ms)
Apr 15 13:45:27.261: INFO: (16) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 3.812444ms)
Apr 15 13:45:27.262: INFO: (16) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname1/proxy/: tls baz (200; 4.02152ms)
Apr 15 13:45:27.262: INFO: (16) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 4.065335ms)
Apr 15 13:45:27.262: INFO: (16) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 4.224888ms)
Apr 15 13:45:27.262: INFO: (16) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname1/proxy/: foo (200; 4.147754ms)
Apr 15 13:45:27.262: INFO: (16) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 4.176161ms)
Apr 15 13:45:27.262: INFO: (16) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 4.397452ms)
Apr 15 13:45:27.264: INFO: (17) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 1.920901ms)
Apr 15 13:45:27.265: INFO: (17) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 3.38371ms)
Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ...
(200; 3.706605ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:460/proxy/: tls baz (200; 3.692216ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 3.712345ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 3.651811ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.784431ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 3.686357ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 3.714607ms) Apr 15 13:45:27.266: INFO: (17) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: test (200; 3.671123ms) Apr 15 13:45:27.271: INFO: (18) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:1080/proxy/: ... (200; 4.023976ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.355689ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 4.395087ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... (200; 4.536872ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.617511ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.728207ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.801121ms) Apr 15 13:45:27.272: INFO: (18) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: ... 
(200; 3.039565ms) Apr 15 13:45:27.279: INFO: (19) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname1/proxy/: foo (200; 3.80846ms) Apr 15 13:45:27.279: INFO: (19) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.075989ms) Apr 15 13:45:27.279: INFO: (19) /api/v1/namespaces/proxy-9182/services/https:proxy-service-g485f:tlsportname2/proxy/: tls qux (200; 4.203449ms) Apr 15 13:45:27.279: INFO: (19) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:462/proxy/: tls qux (200; 4.153793ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/services/proxy-service-g485f:portname2/proxy/: bar (200; 4.183201ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:162/proxy/: bar (200; 4.2207ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt/proxy/: test (200; 4.414344ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/pods/http:proxy-service-g485f-q9dmt:160/proxy/: foo (200; 4.574182ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/services/http:proxy-service-g485f:portname2/proxy/: bar (200; 4.722837ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/pods/proxy-service-g485f-q9dmt:1080/proxy/: test<... 
(200; 4.781223ms) Apr 15 13:45:27.280: INFO: (19) /api/v1/namespaces/proxy-9182/pods/https:proxy-service-g485f-q9dmt:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:45:36.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367' Apr 15 13:45:36.921: INFO: stderr: "" Apr 15 13:45:36.921: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 15 13:45:36.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367' Apr 15 13:45:37.208: INFO: stderr: "" Apr 15 13:45:37.208: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 15 13:45:38.252: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:45:38.252: INFO: Found 0 / 1 Apr 15 13:45:39.214: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:45:39.214: INFO: Found 0 / 1 Apr 15 13:45:40.213: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:45:40.213: INFO: Found 1 / 1 Apr 15 13:45:40.213: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 15 13:45:40.217: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:45:40.217: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 15 13:45:40.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-wsfc4 --namespace=kubectl-7367'
Apr 15 13:45:40.342: INFO: stderr: ""
Apr 15 13:45:40.342: INFO: stdout: "Name: redis-master-wsfc4\nNamespace: kubectl-7367\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Wed, 15 Apr 2020 13:45:36 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.91\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://c520fd8f7b6d213dccf32776618b43fdd92e3680bb5c67edcb83f6c9b3b0fc0c\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 15 Apr 2020 13:45:39 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-96nn7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-96nn7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-96nn7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7367/redis-master-wsfc4 to iruya-worker\n Normal Pulled 2s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n"
Apr 15 13:45:40.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7367'
Apr 15 13:45:40.454: INFO: stderr: ""
Apr 15 13:45:40.454: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7367\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-wsfc4\n"
Apr 15 13:45:40.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7367'
Apr 15 13:45:40.555: INFO: stderr: ""
Apr 15 13:45:40.555: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7367\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.215.33\nPort: <unset> 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.91:6379\nSession Affinity: None\nEvents: <none>\n"
Apr 15 13:45:40.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Apr 15 13:45:40.670: INFO: stderr: ""
Apr 15 13:45:40.670: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 15 Apr 2020 13:45:01 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 15 Apr 2020 13:45:01 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 15 Apr 2020 13:45:01 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 15 Apr 2020 13:45:01 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 30d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n"
Apr 15 13:45:40.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7367'
Apr 15 13:45:40.770: INFO: stderr: ""
Apr 15 13:45:40.770: INFO: stdout: "Name: kubectl-7367\nLabels: e2e-framework=kubectl\n e2e-run=f992be92-f95b-4ac4-a0e2-2e77f59696c5\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:45:40.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7367" for this suite.
Apr 15 13:46:02.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:46:02.868: INFO: namespace kubectl-7367 deletion completed in 22.093859019s

• [SLOW TEST:26.410 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:46:02.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 15 13:46:02.986: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5631" to be "success or failure"
Apr 15 13:46:03.027: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 41.298817ms
Apr 15 13:46:05.032: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046068389s
Apr 15 13:46:07.036: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049899832s
Apr 15 13:46:09.039: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053538485s
STEP: Saw pod success
Apr 15 13:46:09.039: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 15 13:46:09.042: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Apr 15 13:46:09.061: INFO: Waiting for pod pod-host-path-test to disappear
Apr 15 13:46:09.087: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:46:09.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5631" for this suite.
Apr 15 13:46:15.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:46:15.178: INFO: namespace hostpath-5631 deletion completed in 6.087372237s

• [SLOW TEST:12.308 seconds]
[sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:46:15.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 15 13:46:21.266: INFO: Pod pod-hostip-0721b116-27ec-40db-a6a9-e31038c699b2 has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:46:21.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9961" for this suite.
Apr 15 13:46:43.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:46:43.410: INFO: namespace pods-9961 deletion completed in 22.140800059s

• [SLOW TEST:28.230 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:46:43.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-3678/secret-test-8d810efc-bbe6-440a-8767-2f460b55f43c
STEP: Creating a pod to test consume secrets
Apr 15 13:46:43.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456" in namespace "secrets-3678" to be "success or failure"
Apr 15 13:46:43.491: INFO: Pod "pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456": Phase="Pending", Reason="", readiness=false. Elapsed: 4.604764ms
Apr 15 13:46:45.496: INFO: Pod "pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009179558s
Apr 15 13:46:47.499: INFO: Pod "pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012844608s
STEP: Saw pod success
Apr 15 13:46:47.499: INFO: Pod "pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456" satisfied condition "success or failure"
Apr 15 13:46:47.502: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456 container env-test: 
STEP: delete the pod
Apr 15 13:46:47.523: INFO: Waiting for pod pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456 to disappear
Apr 15 13:46:47.527: INFO: Pod pod-configmaps-8d3526dd-e5ec-4e9c-9af0-afa6177d6456 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:46:47.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3678" for this suite.
Apr 15 13:46:53.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:46:53.627: INFO: namespace secrets-3678 deletion completed in 6.096845528s

• [SLOW TEST:10.217 seconds]
[sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:46:53.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 15 13:46:53.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1207'
Apr 15 13:46:56.169: INFO: stderr: ""
Apr 15 13:46:56.169: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 15 13:46:56.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1207'
Apr 15 13:46:56.266: INFO: stderr: ""
Apr 15 13:46:56.266: INFO: stdout: "update-demo-nautilus-2g5k8 update-demo-nautilus-w7zfd "
Apr 15 13:46:56.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2g5k8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1207'
Apr 15 13:46:56.343: INFO: stderr: ""
Apr 15 13:46:56.343: INFO: stdout: ""
Apr 15 13:46:56.343: INFO: update-demo-nautilus-2g5k8 is created but not running
Apr 15 13:47:01.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1207'
Apr 15 13:47:01.442: INFO: stderr: ""
Apr 15 13:47:01.442: INFO: stdout: "update-demo-nautilus-2g5k8 update-demo-nautilus-w7zfd "
Apr 15 13:47:01.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2g5k8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1207'
Apr 15 13:47:01.539: INFO: stderr: ""
Apr 15 13:47:01.539: INFO: stdout: "true"
Apr 15 13:47:01.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2g5k8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1207'
Apr 15 13:47:01.641: INFO: stderr: ""
Apr 15 13:47:01.641: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 15 13:47:01.641: INFO: validating pod update-demo-nautilus-2g5k8
Apr 15 13:47:01.645: INFO: got data: { "image": "nautilus.jpg" }
Apr 15 13:47:01.645: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 15 13:47:01.645: INFO: update-demo-nautilus-2g5k8 is verified up and running
Apr 15 13:47:01.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w7zfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1207'
Apr 15 13:47:01.739: INFO: stderr: ""
Apr 15 13:47:01.739: INFO: stdout: "true"
Apr 15 13:47:01.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w7zfd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1207'
Apr 15 13:47:01.829: INFO: stderr: ""
Apr 15 13:47:01.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 15 13:47:01.829: INFO: validating pod update-demo-nautilus-w7zfd
Apr 15 13:47:01.833: INFO: got data: { "image": "nautilus.jpg" }
Apr 15 13:47:01.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 15 13:47:01.833: INFO: update-demo-nautilus-w7zfd is verified up and running
STEP: using delete to clean up resources
Apr 15 13:47:01.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1207'
Apr 15 13:47:01.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 15 13:47:01.931: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 15 13:47:01.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1207'
Apr 15 13:47:02.017: INFO: stderr: "No resources found.\n"
Apr 15 13:47:02.017: INFO: stdout: ""
Apr 15 13:47:02.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1207 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 15 13:47:02.118: INFO: stderr: ""
Apr 15 13:47:02.118: INFO: stdout: "update-demo-nautilus-2g5k8\nupdate-demo-nautilus-w7zfd\n"
Apr 15 13:47:02.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1207'
Apr 15 13:47:02.722: INFO: stderr: "No resources found.\n"
Apr 15 13:47:02.722: INFO: stdout: ""
Apr 15 13:47:02.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1207 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 15 13:47:02.818: INFO: stderr: ""
Apr 15 13:47:02.818: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:47:02.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1207" for this suite.
Apr 15 13:47:08.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:47:08.923: INFO: namespace kubectl-1207 deletion completed in 6.101270492s

• [SLOW TEST:15.295 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:47:08.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 15 13:47:13.524: INFO: Successfully updated pod "pod-update-090a5e8a-ca8a-4522-889e-e821e5517767"
STEP: verifying the updated pod is in kubernetes
Apr 15 13:47:13.562: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:47:13.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4724" for this suite.
Apr 15 13:47:35.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:47:35.649: INFO: namespace pods-4724 deletion completed in 22.083002012s

• [SLOW TEST:26.726 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:47:35.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 15 13:47:35.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7233'
Apr 15 13:47:35.986: INFO: stderr: ""
Apr 15 13:47:35.986: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 15 13:47:35.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7233' Apr 15 13:47:36.127: INFO: stderr: "" Apr 15 13:47:36.127: INFO: stdout: "update-demo-nautilus-j9d8v update-demo-nautilus-k5j6m " Apr 15 13:47:36.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9d8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:47:36.243: INFO: stderr: "" Apr 15 13:47:36.243: INFO: stdout: "" Apr 15 13:47:36.243: INFO: update-demo-nautilus-j9d8v is created but not running Apr 15 13:47:41.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7233' Apr 15 13:47:41.343: INFO: stderr: "" Apr 15 13:47:41.343: INFO: stdout: "update-demo-nautilus-j9d8v update-demo-nautilus-k5j6m " Apr 15 13:47:41.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9d8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:47:41.444: INFO: stderr: "" Apr 15 13:47:41.444: INFO: stdout: "true" Apr 15 13:47:41.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9d8v -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:47:41.538: INFO: stderr: "" Apr 15 13:47:41.538: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:47:41.538: INFO: validating pod update-demo-nautilus-j9d8v Apr 15 13:47:41.543: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:47:41.543: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 15 13:47:41.543: INFO: update-demo-nautilus-j9d8v is verified up and running Apr 15 13:47:41.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5j6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:47:41.640: INFO: stderr: "" Apr 15 13:47:41.640: INFO: stdout: "true" Apr 15 13:47:41.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5j6m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:47:41.736: INFO: stderr: "" Apr 15 13:47:41.736: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:47:41.736: INFO: validating pod update-demo-nautilus-k5j6m Apr 15 13:47:41.740: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:47:41.740: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 15 13:47:41.740: INFO: update-demo-nautilus-k5j6m is verified up and running STEP: rolling-update to new replication controller Apr 15 13:47:41.742: INFO: scanned /root for discovery docs: Apr 15 13:47:41.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7233' Apr 15 13:48:04.389: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 15 13:48:04.389: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 15 13:48:04.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7233' Apr 15 13:48:04.485: INFO: stderr: "" Apr 15 13:48:04.485: INFO: stdout: "update-demo-kitten-jjvr8 update-demo-kitten-rdz6h " Apr 15 13:48:04.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jjvr8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:48:04.588: INFO: stderr: "" Apr 15 13:48:04.588: INFO: stdout: "true" Apr 15 13:48:04.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jjvr8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:48:04.687: INFO: stderr: "" Apr 15 13:48:04.687: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 15 13:48:04.687: INFO: validating pod update-demo-kitten-jjvr8 Apr 15 13:48:04.691: INFO: got data: { "image": "kitten.jpg" } Apr 15 13:48:04.691: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 15 13:48:04.691: INFO: update-demo-kitten-jjvr8 is verified up and running Apr 15 13:48:04.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rdz6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:48:04.791: INFO: stderr: "" Apr 15 13:48:04.791: INFO: stdout: "true" Apr 15 13:48:04.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rdz6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7233' Apr 15 13:48:04.876: INFO: stderr: "" Apr 15 13:48:04.876: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 15 13:48:04.876: INFO: validating pod update-demo-kitten-rdz6h Apr 15 13:48:04.879: INFO: got data: { "image": "kitten.jpg" } Apr 15 13:48:04.879: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
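The per-pod verification loop above uses two template probes: one that prints `true` only when the named container is in the Running state, and one that prints the container's image. As standalone commands (pod name and namespace are taken from this log; substitute your own):

```shell
# Probe 1: emit "true" iff the container named "update-demo" is running.
# "exists" guards against fields that are not populated yet on a Pending pod.
kubectl get pods update-demo-kitten-rdz6h --namespace=kubectl-7233 -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Probe 2: emit the image of the container named "update-demo".
kubectl get pods update-demo-kitten-rdz6h --namespace=kubectl-7233 -o template \
  --template='{{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
```

An empty stdout from probe 1 (as seen early in this test) means the pod exists but its container is not yet running, so the test sleeps and retries.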
Apr 15 13:48:04.879: INFO: update-demo-kitten-rdz6h is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:48:04.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7233" for this suite. Apr 15 13:48:26.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:48:27.030: INFO: namespace kubectl-7233 deletion completed in 22.148548598s • [SLOW TEST:51.381 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:48:27.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 15 13:48:27.074: 
INFO: namespace kubectl-7774 Apr 15 13:48:27.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7774' Apr 15 13:48:27.373: INFO: stderr: "" Apr 15 13:48:27.373: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 15 13:48:28.378: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:48:28.378: INFO: Found 0 / 1 Apr 15 13:48:29.378: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:48:29.378: INFO: Found 0 / 1 Apr 15 13:48:30.377: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:48:30.377: INFO: Found 0 / 1 Apr 15 13:48:31.378: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:48:31.378: INFO: Found 1 / 1 Apr 15 13:48:31.378: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 15 13:48:31.381: INFO: Selector matched 1 pods for map[app:redis] Apr 15 13:48:31.381: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 15 13:48:31.381: INFO: wait on redis-master startup in kubectl-7774 Apr 15 13:48:31.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-52sml redis-master --namespace=kubectl-7774' Apr 15 13:48:31.496: INFO: stderr: "" Apr 15 13:48:31.496: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Apr 13:48:30.096 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Apr 13:48:30.096 # Server started, Redis version 3.2.12\n1:M 15 Apr 13:48:30.096 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Apr 13:48:30.096 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 15 13:48:31.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7774' Apr 15 13:48:31.628: INFO: stderr: "" Apr 15 13:48:31.628: INFO: stdout: "service/rm2 exposed\n" Apr 15 13:48:31.642: INFO: Service rm2 in namespace kubectl-7774 found. STEP: exposing service Apr 15 13:48:33.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7774' Apr 15 13:48:33.773: INFO: stderr: "" Apr 15 13:48:33.773: INFO: stdout: "service/rm3 exposed\n" Apr 15 13:48:33.782: INFO: Service rm3 in namespace kubectl-7774 found. 
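The expose chain above can be condensed to two commands: `rm2` fronts the replication controller's pods on port 1234, and `rm3` re-exposes `rm2`'s selector on port 2345; both forward to the container's port 6379. Sketched with the names from this log:

```shell
# Expose the RC directly: service rm2, cluster port 1234 -> container port 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 \
  --namespace=kubectl-7774

# Expose an existing service: rm3 inherits rm2's selector,
# cluster port 2345 -> container port 6379.
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 \
  --namespace=kubectl-7774
```

Note that exposing a service copies its selector, so `rm3` routes to the same pods as `rm2` rather than proxying through it.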
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:48:35.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7774" for this suite. Apr 15 13:48:57.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:48:57.904: INFO: namespace kubectl-7774 deletion completed in 22.110675761s • [SLOW TEST:30.873 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:48:57.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
STEP: running the image docker.io/library/nginx:1.14-alpine Apr 15 13:48:57.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2692' Apr 15 13:48:58.075: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 15 13:48:58.075: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 15 13:48:58.089: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 15 13:48:58.099: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 15 13:48:58.109: INFO: scanned /root for discovery docs: Apr 15 13:48:58.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2692' Apr 15 13:49:13.956: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 15 13:49:13.956: INFO: stdout: "Created e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d\nScaling up e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Apr 15 13:49:13.956: INFO: stdout: "Created e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d\nScaling up e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 15 13:49:13.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2692' Apr 15 13:49:14.050: INFO: stderr: "" Apr 15 13:49:14.050: INFO: stdout: "e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d-xf4d8 " Apr 15 13:49:14.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d-xf4d8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2692' Apr 15 13:49:14.136: INFO: stderr: "" Apr 15 13:49:14.136: INFO: stdout: "true" Apr 15 13:49:14.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d-xf4d8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2692' Apr 15 13:49:14.234: INFO: stderr: "" Apr 15 13:49:14.234: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 15 13:49:14.234: INFO: e2e-test-nginx-rc-bc4f4e2150bfd9efe11b94be6c75429d-xf4d8 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 15 13:49:14.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2692' Apr 15 13:49:14.332: INFO: stderr: "" Apr 15 13:49:14.333: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:49:14.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2692" for this suite. 
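The stderr above notes that `kubectl rolling-update` is deprecated in favor of `kubectl rollout`. A rough modern equivalent of this test's flow, sketched with a Deployment instead of a bare replication controller (an assumption for illustration; `rollout` operates on Deployments, DaemonSets, and StatefulSets, not RCs):

```shell
# Create a Deployment running the same image the test uses.
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine

# Trigger a rolling update (here to the same image, mirroring the test's
# "rolling-update to same image" scenario) and wait for it to complete.
kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/e2e-test-nginx
```

Unlike `rolling-update`, which created, scaled, and renamed a second RC client-side (visible in the stdout above), `rollout` is driven server-side by the Deployment controller.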
Apr 15 13:49:20.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:49:20.425: INFO: namespace kubectl-2692 deletion completed in 6.087979189s • [SLOW TEST:22.520 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:49:20.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 15 13:49:25.034: INFO: Successfully updated pod "labelsupdate5f15ee36-c683-4997-aec4-4f1b382cc3b9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:49:27.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6016" 
for this suite. Apr 15 13:49:47.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:49:47.138: INFO: namespace downward-api-6016 deletion completed in 20.08462951s • [SLOW TEST:26.712 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:49:47.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Apr 15 13:49:47.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 15 13:49:47.312: INFO: stderr: "" Apr 15 13:49:47.312: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:49:47.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5629" for this suite. Apr 15 13:49:53.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:49:53.490: INFO: namespace kubectl-5629 deletion completed in 6.174756086s • [SLOW TEST:6.352 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:49:53.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:49:53.530: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 15 13:49:55.603: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:49:55.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2898" for this suite. Apr 15 13:50:01.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:50:01.917: INFO: namespace replication-controller-2898 deletion completed in 6.199469692s • [SLOW TEST:8.426 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:50:01.918: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-d741e7b4-173e-4bd1-a7f1-562327ccfe85 STEP: Creating a pod to test consume configMaps Apr 15 13:50:02.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5" in namespace "configmap-3784" to be "success or failure" Apr 15 13:50:02.044: INFO: Pod "pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.050065ms Apr 15 13:50:04.094: INFO: Pod "pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083351699s Apr 15 13:50:06.098: INFO: Pod "pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087539464s STEP: Saw pod success Apr 15 13:50:06.098: INFO: Pod "pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5" satisfied condition "success or failure" Apr 15 13:50:06.101: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5 container configmap-volume-test: STEP: delete the pod Apr 15 13:50:06.118: INFO: Waiting for pod pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5 to disappear Apr 15 13:50:06.122: INFO: Pod pod-configmaps-9e85baec-27c3-4fc7-b0e2-2b22a68c40b5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:50:06.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3784" for this suite. 
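The "success or failure" wait above is the framework polling the pod's phase until it reaches a terminal state. A minimal sketch of that loop in plain shell (pod name and namespace are placeholders; the framework's actual implementation is Go, with a 5m timeout as logged):

```shell
# Poll the pod phase until it terminates (Succeeded or Failed) or we give up.
# 60 iterations x 5s ~= the 5m0s timeout the framework uses.
POD=pod-configmaps-example NAMESPACE=configmap-3784   # placeholders
for i in $(seq 1 60); do
  phase=$(kubectl get pod "$POD" -n "$NAMESPACE" -o jsonpath='{.status.phase}')
  if [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; then
    break
  fi
  sleep 5
done
echo "final phase: $phase"
```

The per-iteration log lines above (`Phase="Pending" ... Elapsed: ...`) correspond to each poll of this loop.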
Apr 15 13:50:12.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:50:12.206: INFO: namespace configmap-3784 deletion completed in 6.08134837s • [SLOW TEST:10.289 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:50:12.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 15 13:50:12.269: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2755,SelfLink:/api/v1/namespaces/watch-2755/configmaps/e2e-watch-test-watch-closed,UID:03b03c50-0a09-4936-8a90-d6ec78f4e6e3,ResourceVersion:5568965,Generation:0,CreationTimestamp:2020-04-15 13:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 15 13:50:12.269: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2755,SelfLink:/api/v1/namespaces/watch-2755/configmaps/e2e-watch-test-watch-closed,UID:03b03c50-0a09-4936-8a90-d6ec78f4e6e3,ResourceVersion:5568966,Generation:0,CreationTimestamp:2020-04-15 13:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 15 13:50:12.299: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2755,SelfLink:/api/v1/namespaces/watch-2755/configmaps/e2e-watch-test-watch-closed,UID:03b03c50-0a09-4936-8a90-d6ec78f4e6e3,ResourceVersion:5568967,Generation:0,CreationTimestamp:2020-04-15 13:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 15 13:50:12.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2755,SelfLink:/api/v1/namespaces/watch-2755/configmaps/e2e-watch-test-watch-closed,UID:03b03c50-0a09-4936-8a90-d6ec78f4e6e3,ResourceVersion:5568968,Generation:0,CreationTimestamp:2020-04-15 13:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:50:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2755" for this suite. 
Apr 15 13:50:18.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:50:18.419: INFO: namespace watch-2755 deletion completed in 6.096304539s • [SLOW TEST:6.212 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:50:18.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 15 13:50:18.457: INFO: Waiting up to 5m0s for pod "downward-api-e158cc22-8fec-4c68-b197-da463147797f" in namespace "downward-api-2697" to be "success or failure" Apr 15 13:50:18.474: INFO: Pod "downward-api-e158cc22-8fec-4c68-b197-da463147797f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.506136ms Apr 15 13:50:20.479: INFO: Pod "downward-api-e158cc22-8fec-4c68-b197-da463147797f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021868389s Apr 15 13:50:22.483: INFO: Pod "downward-api-e158cc22-8fec-4c68-b197-da463147797f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026295055s STEP: Saw pod success Apr 15 13:50:22.483: INFO: Pod "downward-api-e158cc22-8fec-4c68-b197-da463147797f" satisfied condition "success or failure" Apr 15 13:50:22.486: INFO: Trying to get logs from node iruya-worker pod downward-api-e158cc22-8fec-4c68-b197-da463147797f container dapi-container: STEP: delete the pod Apr 15 13:50:22.524: INFO: Waiting for pod downward-api-e158cc22-8fec-4c68-b197-da463147797f to disappear Apr 15 13:50:22.545: INFO: Pod downward-api-e158cc22-8fec-4c68-b197-da463147797f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:50:22.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2697" for this suite. Apr 15 13:50:28.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:50:28.657: INFO: namespace downward-api-2697 deletion completed in 6.108176874s • [SLOW TEST:10.237 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 15 13:50:28.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 15 13:50:28.721: INFO: Waiting up to 5m0s for pod "var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c" in namespace "var-expansion-613" to be "success or failure" Apr 15 13:50:28.733: INFO: Pod "var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.059378ms Apr 15 13:50:30.737: INFO: Pod "var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016392674s Apr 15 13:50:32.741: INFO: Pod "var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020246758s STEP: Saw pod success Apr 15 13:50:32.741: INFO: Pod "var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c" satisfied condition "success or failure" Apr 15 13:50:32.743: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c container dapi-container: STEP: delete the pod Apr 15 13:50:32.762: INFO: Waiting for pod var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c to disappear Apr 15 13:50:32.766: INFO: Pod var-expansion-9d92da6c-2de7-46b5-afd8-de67df0f9d3c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:50:32.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-613" for this suite. 
Apr 15 13:50:38.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:50:38.907: INFO: namespace var-expansion-613 deletion completed in 6.138820477s • [SLOW TEST:10.250 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:50:38.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:50:43.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1535" for this suite. 
Apr 15 13:51:21.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:51:21.118: INFO: namespace kubelet-test-1535 deletion completed in 38.095564717s • [SLOW TEST:42.210 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:51:21.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 15 13:51:25.700: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7448 pod-service-account-83f708dc-1e1f-496c-b1ba-7d4187648170 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 15 13:51:25.900: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7448 pod-service-account-83f708dc-1e1f-496c-b1ba-7d4187648170 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the 
container Apr 15 13:51:26.113: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7448 pod-service-account-83f708dc-1e1f-496c-b1ba-7d4187648170 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:51:26.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7448" for this suite. Apr 15 13:51:32.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:51:32.451: INFO: namespace svcaccounts-7448 deletion completed in 6.107459501s • [SLOW TEST:11.333 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:51:32.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7b5d0d0c-56e1-42fe-a7de-6b91fa9e7962 STEP: Creating a pod to test consume secrets Apr 15 13:51:32.509: INFO: Waiting 
up to 5m0s for pod "pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36" in namespace "secrets-7761" to be "success or failure" Apr 15 13:51:32.526: INFO: Pod "pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36": Phase="Pending", Reason="", readiness=false. Elapsed: 16.844668ms Apr 15 13:51:34.545: INFO: Pod "pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035225536s Apr 15 13:51:36.549: INFO: Pod "pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040135905s STEP: Saw pod success Apr 15 13:51:36.550: INFO: Pod "pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36" satisfied condition "success or failure" Apr 15 13:51:36.553: INFO: Trying to get logs from node iruya-worker pod pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36 container secret-volume-test: STEP: delete the pod Apr 15 13:51:36.586: INFO: Waiting for pod pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36 to disappear Apr 15 13:51:36.672: INFO: Pod pod-secrets-4b29efc8-8896-453a-80c5-eb97d97f5f36 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:51:36.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7761" for this suite. 
Apr 15 13:51:42.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:51:42.775: INFO: namespace secrets-7761 deletion completed in 6.098186371s • [SLOW TEST:10.324 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:51:42.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:51:42.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1059" for this suite. 
Apr 15 13:51:48.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:51:49.018: INFO: namespace kubelet-test-1059 deletion completed in 6.103572322s • [SLOW TEST:6.243 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:51:49.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 13:51:49.079: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 15 13:51:54.084: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 15 13:51:54.084: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 15 13:51:56.088: INFO: Creating deployment "test-rollover-deployment" Apr 15 13:51:56.098: INFO: Make sure deployment "test-rollover-deployment" performs scaling 
operations Apr 15 13:51:58.105: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 15 13:51:58.110: INFO: Ensure that both replica sets have 1 created replica Apr 15 13:51:58.115: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 15 13:51:58.121: INFO: Updating deployment test-rollover-deployment Apr 15 13:51:58.121: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 15 13:52:00.131: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 15 13:52:00.135: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 15 13:52:00.138: INFO: all replica sets need to contain the pod-template-hash label Apr 15 13:52:00.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555518, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:52:02.146: INFO: all replica sets need to contain the pod-template-hash label Apr 15 13:52:02.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555521, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:52:04.145: INFO: all replica sets need to contain the pod-template-hash label Apr 15 13:52:04.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555521, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:52:06.144: INFO: all replica sets need to contain the pod-template-hash label Apr 15 13:52:06.144: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555521, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:52:08.147: INFO: all replica sets need to contain the pod-template-hash label Apr 15 13:52:08.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555521, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:52:10.146: INFO: all 
replica sets need to contain the pod-template-hash label Apr 15 13:52:10.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555521, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722555516, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 13:52:12.146: INFO: Apr 15 13:52:12.146: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 15 13:52:12.166: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9999,SelfLink:/apis/apps/v1/namespaces/deployment-9999/deployments/test-rollover-deployment,UID:603efc1f-398d-40cb-847f-d3c7c23d066c,ResourceVersion:5569420,Generation:2,CreationTimestamp:2020-04-15 13:51:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-15 13:51:56 +0000 UTC 2020-04-15 13:51:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-15 13:52:11 +0000 UTC 2020-04-15 13:51:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 15 13:52:12.170: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9999,SelfLink:/apis/apps/v1/namespaces/deployment-9999/replicasets/test-rollover-deployment-854595fc44,UID:7e1a1cc9-5290-49ac-ad20-7cfab5d85873,ResourceVersion:5569408,Generation:2,CreationTimestamp:2020-04-15 13:51:58 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 603efc1f-398d-40cb-847f-d3c7c23d066c 0xc0030345c7 0xc0030345c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 15 13:52:12.170: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 15 13:52:12.170: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9999,SelfLink:/apis/apps/v1/namespaces/deployment-9999/replicasets/test-rollover-controller,UID:47637ad9-747a-4d5d-8844-91063e203d7b,ResourceVersion:5569419,Generation:2,CreationTimestamp:2020-04-15 13:51:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 603efc1f-398d-40cb-847f-d3c7c23d066c 0xc0030344f7 0xc0030344f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 13:52:12.170: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9999,SelfLink:/apis/apps/v1/namespaces/deployment-9999/replicasets/test-rollover-deployment-9b8b997cf,UID:93a75151-0aa6-415f-aa10-f924dfff271b,ResourceVersion:5569372,Generation:2,CreationTimestamp:2020-04-15 13:51:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 603efc1f-398d-40cb-847f-d3c7c23d066c 0xc003034690 0xc003034691}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 13:52:12.174: INFO: Pod "test-rollover-deployment-854595fc44-nv2nx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-nv2nx,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9999,SelfLink:/api/v1/namespaces/deployment-9999/pods/test-rollover-deployment-854595fc44-nv2nx,UID:517a4964-9433-42b2-9a48-44402f56aee7,ResourceVersion:5569385,Generation:0,CreationTimestamp:2020-04-15 13:51:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 7e1a1cc9-5290-49ac-ad20-7cfab5d85873 0xc0030c56f7 0xc0030c56f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djdsk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djdsk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-djdsk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030c5770} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030c5790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:51:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:52:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:52:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 13:51:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.106,StartTime:2020-04-15 13:51:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-15 13:52:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://88498212a0c50cd1709d3d3b0bcedf248986e0369dbbe403bac83405b5402038}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:52:12.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9999" for this suite. Apr 15 13:52:18.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:52:18.272: INFO: namespace deployment-9999 deletion completed in 6.095134243s • [SLOW TEST:29.254 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:52:18.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 15 13:52:18.505: INFO: Waiting up to 5m0s for pod 
"pod-9cf936bb-c56e-4494-a6fe-096d5929dde0" in namespace "emptydir-3113" to be "success or failure" Apr 15 13:52:18.525: INFO: Pod "pod-9cf936bb-c56e-4494-a6fe-096d5929dde0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.23716ms Apr 15 13:52:20.566: INFO: Pod "pod-9cf936bb-c56e-4494-a6fe-096d5929dde0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06052837s Apr 15 13:52:22.581: INFO: Pod "pod-9cf936bb-c56e-4494-a6fe-096d5929dde0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07567179s STEP: Saw pod success Apr 15 13:52:22.581: INFO: Pod "pod-9cf936bb-c56e-4494-a6fe-096d5929dde0" satisfied condition "success or failure" Apr 15 13:52:22.583: INFO: Trying to get logs from node iruya-worker2 pod pod-9cf936bb-c56e-4494-a6fe-096d5929dde0 container test-container: STEP: delete the pod Apr 15 13:52:22.603: INFO: Waiting for pod pod-9cf936bb-c56e-4494-a6fe-096d5929dde0 to disappear Apr 15 13:52:22.607: INFO: Pod pod-9cf936bb-c56e-4494-a6fe-096d5929dde0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:52:22.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3113" for this suite. 
Apr 15 13:52:28.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:52:28.729: INFO: namespace emptydir-3113 deletion completed in 6.1198619s • [SLOW TEST:10.457 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:52:28.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 15 13:52:28.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5789' Apr 15 13:52:29.051: INFO: stderr: "" Apr 15 13:52:29.051: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 15 13:52:29.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:29.232: INFO: stderr: "" Apr 15 13:52:29.232: INFO: stdout: "update-demo-nautilus-8g7q8 update-demo-nautilus-8nls6 " Apr 15 13:52:29.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8g7q8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:29.347: INFO: stderr: "" Apr 15 13:52:29.347: INFO: stdout: "" Apr 15 13:52:29.347: INFO: update-demo-nautilus-8g7q8 is created but not running Apr 15 13:52:34.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:34.449: INFO: stderr: "" Apr 15 13:52:34.449: INFO: stdout: "update-demo-nautilus-8g7q8 update-demo-nautilus-8nls6 " Apr 15 13:52:34.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8g7q8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:34.545: INFO: stderr: "" Apr 15 13:52:34.545: INFO: stdout: "true" Apr 15 13:52:34.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8g7q8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:34.637: INFO: stderr: "" Apr 15 13:52:34.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:52:34.637: INFO: validating pod update-demo-nautilus-8g7q8 Apr 15 13:52:34.641: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:52:34.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 15 13:52:34.641: INFO: update-demo-nautilus-8g7q8 is verified up and running Apr 15 13:52:34.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:34.730: INFO: stderr: "" Apr 15 13:52:34.730: INFO: stdout: "true" Apr 15 13:52:34.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:34.817: INFO: stderr: "" Apr 15 13:52:34.817: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:52:34.817: INFO: validating pod update-demo-nautilus-8nls6 Apr 15 13:52:34.820: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:52:34.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 15 13:52:34.820: INFO: update-demo-nautilus-8nls6 is verified up and running STEP: scaling down the replication controller Apr 15 13:52:34.823: INFO: scanned /root for discovery docs: Apr 15 13:52:34.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5789' Apr 15 13:52:35.931: INFO: stderr: "" Apr 15 13:52:35.931: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 15 13:52:35.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:36.022: INFO: stderr: "" Apr 15 13:52:36.022: INFO: stdout: "update-demo-nautilus-8g7q8 update-demo-nautilus-8nls6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 15 13:52:41.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:41.117: INFO: stderr: "" Apr 15 13:52:41.117: INFO: stdout: "update-demo-nautilus-8g7q8 update-demo-nautilus-8nls6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 15 13:52:46.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:46.217: INFO: stderr: "" Apr 15 13:52:46.217: INFO: stdout: "update-demo-nautilus-8nls6 " Apr 15 13:52:46.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:46.306: INFO: stderr: "" Apr 15 13:52:46.306: INFO: stdout: "true" Apr 15 13:52:46.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:46.404: INFO: stderr: "" Apr 15 13:52:46.404: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:52:46.404: INFO: validating pod update-demo-nautilus-8nls6 Apr 15 13:52:46.408: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:52:46.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 15 13:52:46.408: INFO: update-demo-nautilus-8nls6 is verified up and running STEP: scaling up the replication controller Apr 15 13:52:46.410: INFO: scanned /root for discovery docs: Apr 15 13:52:46.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5789' Apr 15 13:52:47.530: INFO: stderr: "" Apr 15 13:52:47.530: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 15 13:52:47.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:47.644: INFO: stderr: "" Apr 15 13:52:47.644: INFO: stdout: "update-demo-nautilus-8nls6 update-demo-nautilus-dmlgc " Apr 15 13:52:47.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:47.731: INFO: stderr: "" Apr 15 13:52:47.731: INFO: stdout: "true" Apr 15 13:52:47.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:47.823: INFO: stderr: "" Apr 15 13:52:47.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:52:47.823: INFO: validating pod update-demo-nautilus-8nls6 Apr 15 13:52:47.888: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:52:47.888: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 15 13:52:47.888: INFO: update-demo-nautilus-8nls6 is verified up and running Apr 15 13:52:47.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dmlgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:47.982: INFO: stderr: "" Apr 15 13:52:47.983: INFO: stdout: "" Apr 15 13:52:47.983: INFO: update-demo-nautilus-dmlgc is created but not running Apr 15 13:52:52.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5789' Apr 15 13:52:53.091: INFO: stderr: "" Apr 15 13:52:53.091: INFO: stdout: "update-demo-nautilus-8nls6 update-demo-nautilus-dmlgc " Apr 15 13:52:53.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:53.200: INFO: stderr: "" Apr 15 13:52:53.200: INFO: stdout: "true" Apr 15 13:52:53.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nls6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:53.295: INFO: stderr: "" Apr 15 13:52:53.295: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:52:53.295: INFO: validating pod update-demo-nautilus-8nls6 Apr 15 13:52:53.299: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:52:53.299: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 15 13:52:53.299: INFO: update-demo-nautilus-8nls6 is verified up and running Apr 15 13:52:53.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dmlgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:53.401: INFO: stderr: "" Apr 15 13:52:53.401: INFO: stdout: "true" Apr 15 13:52:53.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dmlgc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' Apr 15 13:52:53.494: INFO: stderr: "" Apr 15 13:52:53.494: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 15 13:52:53.494: INFO: validating pod update-demo-nautilus-dmlgc Apr 15 13:52:53.498: INFO: got data: { "image": "nautilus.jpg" } Apr 15 13:52:53.498: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 15 13:52:53.498: INFO: update-demo-nautilus-dmlgc is verified up and running STEP: using delete to clean up resources Apr 15 13:52:53.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5789' Apr 15 13:52:53.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 13:52:53.597: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 15 13:52:53.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5789' Apr 15 13:52:53.712: INFO: stderr: "No resources found.\n" Apr 15 13:52:53.712: INFO: stdout: "" Apr 15 13:52:53.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5789 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 15 13:52:53.802: INFO: stderr: "" Apr 15 13:52:53.802: INFO: stdout: "update-demo-nautilus-8nls6\nupdate-demo-nautilus-dmlgc\n" Apr 15 13:52:54.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5789' Apr 15 13:52:54.406: INFO: stderr: "No resources found.\n" Apr 15 13:52:54.406: INFO: stdout: "" Apr 15 
13:52:54.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5789 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 15 13:52:54.505: INFO: stderr: "" Apr 15 13:52:54.505: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:52:54.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5789" for this suite. Apr 15 13:53:00.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 13:53:00.734: INFO: namespace kubectl-5789 deletion completed in 6.22546743s • [SLOW TEST:32.004 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 13:53:00.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 15 13:53:00.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7622' Apr 15 13:53:00.885: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 15 13:53:00.885: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 15 13:53:00.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7622' Apr 15 13:53:01.007: INFO: stderr: "" Apr 15 13:53:01.007: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 13:53:01.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7622" for this suite. 
Apr 15 13:53:23.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:53:23.129: INFO: namespace kubectl-7622 deletion completed in 22.108770025s

• [SLOW TEST:22.395 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:53:23.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-bdeb033b-a3e2-4524-b71d-9a46ab64906e
STEP: Creating a pod to test consume secrets
Apr 15 13:53:23.224: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446" in namespace "projected-6899" to be "success or failure"
Apr 15 13:53:23.234: INFO: Pod "pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446": Phase="Pending", Reason="", readiness=false. Elapsed: 9.315705ms
Apr 15 13:53:25.277: INFO: Pod "pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052523088s
Apr 15 13:53:27.282: INFO: Pod "pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057386529s
STEP: Saw pod success
Apr 15 13:53:27.282: INFO: Pod "pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446" satisfied condition "success or failure"
Apr 15 13:53:27.285: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446 container secret-volume-test: 
STEP: delete the pod
Apr 15 13:53:27.327: INFO: Waiting for pod pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446 to disappear
Apr 15 13:53:27.335: INFO: Pod pod-projected-secrets-25413282-4bd5-4b22-a2a2-9777c9a71446 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:53:27.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6899" for this suite.
Apr 15 13:53:33.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:53:33.424: INFO: namespace projected-6899 deletion completed in 6.085032925s

• [SLOW TEST:10.293 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath
Atomic writer volumes
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:53:33.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-298s
STEP: Creating a pod to test atomic-volume-subpath
Apr 15 13:53:33.518: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-298s" in namespace "subpath-5667" to be "success or failure"
Apr 15 13:53:33.521: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.756473ms
Apr 15 13:53:35.525: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007338308s
Apr 15 13:53:37.529: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 4.01107178s
Apr 15 13:53:39.534: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 6.015915191s
Apr 15 13:53:41.546: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 8.028191718s
Apr 15 13:53:43.550: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 10.032640215s
Apr 15 13:53:45.555: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 12.037482533s
Apr 15 13:53:47.559: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 14.041571017s
Apr 15 13:53:49.564: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 16.04604163s
Apr 15 13:53:51.568: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 18.050083024s
Apr 15 13:53:53.572: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 20.054594911s
Apr 15 13:53:55.577: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Running", Reason="", readiness=true. Elapsed: 22.058836324s
Apr 15 13:53:57.580: INFO: Pod "pod-subpath-test-configmap-298s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062536282s
STEP: Saw pod success
Apr 15 13:53:57.580: INFO: Pod "pod-subpath-test-configmap-298s" satisfied condition "success or failure"
Apr 15 13:53:57.583: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-298s container test-container-subpath-configmap-298s: 
STEP: delete the pod
Apr 15 13:53:57.605: INFO: Waiting for pod pod-subpath-test-configmap-298s to disappear
Apr 15 13:53:57.615: INFO: Pod pod-subpath-test-configmap-298s no longer exists
STEP: Deleting pod pod-subpath-test-configmap-298s
Apr 15 13:53:57.615: INFO: Deleting pod "pod-subpath-test-configmap-298s" in namespace "subpath-5667"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:53:57.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5667" for this suite.
Apr 15 13:54:03.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:54:03.748: INFO: namespace subpath-5667 deletion completed in 6.12680468s

• [SLOW TEST:30.324 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:54:03.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-906
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-906 to expose endpoints map[]
Apr 15 13:54:03.892: INFO: Get endpoints failed (16.802235ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 15 13:54:04.895: INFO: successfully validated that service multi-endpoint-test in namespace services-906 exposes endpoints map[] (1.020516873s elapsed)
STEP: Creating pod pod1 in namespace services-906
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-906 to expose endpoints map[pod1:[100]]
Apr 15 13:54:07.932: INFO: successfully validated that service multi-endpoint-test in namespace services-906 exposes endpoints map[pod1:[100]] (3.026076963s elapsed)
STEP: Creating pod pod2 in namespace services-906
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-906 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 15 13:54:11.051: INFO: successfully validated that service multi-endpoint-test in namespace services-906 exposes endpoints map[pod1:[100] pod2:[101]] (3.115782659s elapsed)
STEP: Deleting pod pod1 in namespace services-906
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-906 to expose endpoints map[pod2:[101]]
Apr 15 13:54:12.097: INFO: successfully validated that service multi-endpoint-test in namespace services-906 exposes endpoints map[pod2:[101]] (1.041672998s elapsed)
STEP: Deleting pod pod2 in namespace services-906
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-906 to expose endpoints map[]
Apr 15 13:54:13.125: INFO: successfully validated that service multi-endpoint-test in namespace services-906 exposes endpoints map[] (1.022770971s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:54:13.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-906" for this suite.
Apr 15 13:54:35.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:54:35.295: INFO: namespace services-906 deletion completed in 22.091582826s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:31.546 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:54:35.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1514
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 15 13:54:35.449: INFO: Found 0 stateful pods, waiting for 3
Apr 15 13:54:45.454: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:54:45.454: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:54:45.454: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Apr 15 13:54:55.453: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:54:55.453: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:54:55.453: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 15 13:54:55.478: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 15 13:55:05.535: INFO: Updating stateful set ss2
Apr 15 13:55:05.546: INFO: Waiting for Pod statefulset-1514/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Apr 15 13:55:15.662: INFO: Found 2 stateful pods, waiting for 3
Apr 15 13:55:25.666: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:55:25.666: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 13:55:25.666: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 15 13:55:25.690: INFO: Updating stateful set ss2
Apr 15 13:55:25.703: INFO: Waiting for Pod statefulset-1514/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 15 13:55:35.729: INFO: Updating stateful set ss2
Apr 15 13:55:35.756: INFO: Waiting for StatefulSet statefulset-1514/ss2 to complete update
Apr 15 13:55:35.756: INFO: Waiting for Pod statefulset-1514/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 15 13:55:45.764: INFO: Deleting all statefulset in ns statefulset-1514
Apr 15 13:55:45.768: INFO: Scaling statefulset ss2 to 0
Apr 15 13:56:05.787: INFO: Waiting for statefulset status.replicas updated to 0
Apr 15 13:56:05.790: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:56:05.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1514" for this suite.
Apr 15 13:56:11.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:56:11.912: INFO: namespace statefulset-1514 deletion completed in 6.102075657s

• [SLOW TEST:96.616 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:56:11.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 15 13:56:11.981: INFO: Waiting up to 5m0s for pod "pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7" in namespace "emptydir-541" to be "success or failure"
Apr 15 13:56:11.985: INFO: Pod "pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043444ms
Apr 15 13:56:13.988: INFO: Pod "pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007442705s
Apr 15 13:56:15.993: INFO: Pod "pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011990448s
STEP: Saw pod success
Apr 15 13:56:15.993: INFO: Pod "pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7" satisfied condition "success or failure"
Apr 15 13:56:15.995: INFO: Trying to get logs from node iruya-worker2 pod pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7 container test-container: 
STEP: delete the pod
Apr 15 13:56:16.112: INFO: Waiting for pod pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7 to disappear
Apr 15 13:56:16.117: INFO: Pod pod-a27f7e2b-5eac-4832-9646-5e860b7e8be7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:56:16.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-541" for this suite.
Apr 15 13:56:22.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:56:22.224: INFO: namespace emptydir-541 deletion completed in 6.104524669s

• [SLOW TEST:10.313 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath
Atomic writer volumes
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:56:22.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-wqnv
STEP: Creating a pod to test atomic-volume-subpath
Apr 15 13:56:22.293: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wqnv" in namespace "subpath-4371" to be "success or failure"
Apr 15 13:56:22.297: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.528311ms
Apr 15 13:56:24.300: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006937396s
Apr 15 13:56:26.305: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 4.011588401s
Apr 15 13:56:28.309: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 6.015791192s
Apr 15 13:56:30.313: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 8.019451431s
Apr 15 13:56:32.317: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 10.023662353s
Apr 15 13:56:34.321: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 12.027768847s
Apr 15 13:56:36.326: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 14.032140491s
Apr 15 13:56:38.330: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 16.036538436s
Apr 15 13:56:40.334: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 18.040537241s
Apr 15 13:56:42.339: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 20.045029807s
Apr 15 13:56:44.343: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Running", Reason="", readiness=true. Elapsed: 22.049611095s
Apr 15 13:56:46.347: INFO: Pod "pod-subpath-test-secret-wqnv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053699499s
STEP: Saw pod success
Apr 15 13:56:46.347: INFO: Pod "pod-subpath-test-secret-wqnv" satisfied condition "success or failure"
Apr 15 13:56:46.350: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-wqnv container test-container-subpath-secret-wqnv: 
STEP: delete the pod
Apr 15 13:56:46.369: INFO: Waiting for pod pod-subpath-test-secret-wqnv to disappear
Apr 15 13:56:46.374: INFO: Pod pod-subpath-test-secret-wqnv no longer exists
STEP: Deleting pod pod-subpath-test-secret-wqnv
Apr 15 13:56:46.374: INFO: Deleting pod "pod-subpath-test-secret-wqnv" in namespace "subpath-4371"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:56:46.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4371" for this suite.
Apr 15 13:56:52.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:56:52.489: INFO: namespace subpath-4371 deletion completed in 6.11058916s

• [SLOW TEST:30.264 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath
Atomic writer volumes
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:56:52.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-8sw2
STEP: Creating a pod to test atomic-volume-subpath
Apr 15 13:56:52.586: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8sw2" in namespace "subpath-1219" to be "success or failure"
Apr 15 13:56:52.590: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.399425ms
Apr 15 13:56:54.594: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007603473s
Apr 15 13:56:56.599: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012105866s
Apr 15 13:56:58.603: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 6.016653837s
Apr 15 13:57:00.607: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 8.020375865s
Apr 15 13:57:02.611: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 10.02421596s
Apr 15 13:57:04.615: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 12.028436997s
Apr 15 13:57:06.620: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 14.0330189s
Apr 15 13:57:08.624: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 16.037345995s
Apr 15 13:57:10.628: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 18.041641801s
Apr 15 13:57:12.632: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 20.045515332s
Apr 15 13:57:14.636: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Running", Reason="", readiness=true. Elapsed: 22.049809379s
Apr 15 13:57:16.645: INFO: Pod "pod-subpath-test-downwardapi-8sw2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058788159s
STEP: Saw pod success
Apr 15 13:57:16.645: INFO: Pod "pod-subpath-test-downwardapi-8sw2" satisfied condition "success or failure"
Apr 15 13:57:16.662: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-8sw2 container test-container-subpath-downwardapi-8sw2: 
STEP: delete the pod
Apr 15 13:57:16.682: INFO: Waiting for pod pod-subpath-test-downwardapi-8sw2 to disappear
Apr 15 13:57:16.687: INFO: Pod pod-subpath-test-downwardapi-8sw2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-8sw2
Apr 15 13:57:16.687: INFO: Deleting pod "pod-subpath-test-downwardapi-8sw2" in namespace "subpath-1219"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:57:16.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1219" for this suite.
Apr 15 13:57:22.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:57:22.781: INFO: namespace subpath-1219 deletion completed in 6.088816283s

• [SLOW TEST:30.291 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:57:22.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0415 13:58:02.990238 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 15 13:58:02.990: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:58:02.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9234" for this suite.
Apr 15 13:58:13.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:58:13.086: INFO: namespace gc-9234 deletion completed in 10.092477453s

• [SLOW TEST:50.304 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Networking
Granular Checks: Pods
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:58:13.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-50
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 15 13:58:13.137: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 15 13:58:33.242: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.122:8080/dial?request=hostName&protocol=udp&host=10.244.1.83&port=8081&tries=1'] Namespace:pod-network-test-50 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 13:58:33.242: INFO: >>> kubeConfig: /root/.kube/config
I0415 13:58:33.276715 6 log.go:172] (0xc000e6e370) (0xc0019106e0) Create stream
I0415 13:58:33.276744 6 log.go:172] (0xc000e6e370) (0xc0019106e0) Stream added, broadcasting: 1
I0415 13:58:33.279199 6 log.go:172] (0xc000e6e370) Reply frame received for 1
I0415 13:58:33.279242 6 log.go:172] (0xc000e6e370) (0xc0015c6460) Create stream
I0415 13:58:33.279257 6 log.go:172] (0xc000e6e370) (0xc0015c6460) Stream added, broadcasting: 3
I0415 13:58:33.280242 6 log.go:172] (0xc000e6e370) Reply frame received for 3
I0415 13:58:33.280302 6 log.go:172] (0xc000e6e370) (0xc001910820) Create stream
I0415 13:58:33.280316 6 log.go:172] (0xc000e6e370) (0xc001910820) Stream added, broadcasting: 5
I0415 13:58:33.281496 6 log.go:172] (0xc000e6e370) Reply frame received for 5
I0415 13:58:33.371790 6 log.go:172] (0xc000e6e370) Data frame received for 3
I0415 13:58:33.371824 6 log.go:172] (0xc0015c6460) (3) Data frame handling
I0415 13:58:33.371844 6 log.go:172] (0xc0015c6460) (3) Data frame sent
I0415 13:58:33.372733 6 log.go:172] (0xc000e6e370) Data frame received for 3
I0415 13:58:33.372763 6 log.go:172] (0xc000e6e370) Data frame received for 5
I0415 13:58:33.372794 6 log.go:172] (0xc001910820) (5) Data frame handling
I0415 13:58:33.372833 6 log.go:172] (0xc0015c6460) (3) Data frame handling
I0415 13:58:33.374886 6 log.go:172] (0xc000e6e370) Data frame received for 1
I0415 13:58:33.374910 6 log.go:172] (0xc0019106e0) (1) Data frame handling
I0415 13:58:33.374932 6 log.go:172] (0xc0019106e0) (1) Data frame sent
I0415 13:58:33.374955 6 log.go:172] (0xc000e6e370) (0xc0019106e0) Stream removed, broadcasting: 1
I0415 13:58:33.375023 6 log.go:172] (0xc000e6e370) (0xc0019106e0) Stream removed, broadcasting: 1
I0415 13:58:33.375034 6 log.go:172] (0xc000e6e370) (0xc0015c6460) Stream removed, broadcasting: 3
I0415 13:58:33.375039 6 log.go:172] (0xc000e6e370) (0xc001910820) Stream removed, broadcasting: 5
I0415 13:58:33.375072 6 log.go:172] (0xc000e6e370) Go away received
Apr 15 13:58:33.375: INFO: Waiting for endpoints: map[]
Apr 15 13:58:33.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.122:8080/dial?request=hostName&protocol=udp&host=10.244.2.121&port=8081&tries=1'] Namespace:pod-network-test-50 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 13:58:33.382: INFO: >>> kubeConfig: /root/.kube/config
I0415 13:58:33.412368 6 log.go:172] (0xc000bea8f0) (0xc0015c6820) Create stream
I0415 13:58:33.412400 6 log.go:172] (0xc000bea8f0) (0xc0015c6820) Stream added, broadcasting: 1
I0415 13:58:33.414629 6 log.go:172] (0xc000bea8f0) Reply frame received for 1
I0415 13:58:33.414657 6 log.go:172] (0xc000bea8f0) (0xc002801040) Create stream
I0415 13:58:33.414667 6 log.go:172] (0xc000bea8f0) (0xc002801040) Stream added, broadcasting: 3
I0415 13:58:33.415305 6 log.go:172] (0xc000bea8f0) Reply frame received for 3
I0415 13:58:33.415336 6 log.go:172] (0xc000bea8f0) (0xc001d16320) Create stream
I0415 13:58:33.415348 6 log.go:172] (0xc000bea8f0) (0xc001d16320) Stream added, broadcasting: 5
I0415 13:58:33.416125 6 log.go:172] (0xc000bea8f0) Reply frame received for 5
I0415 13:58:33.476471 6 log.go:172] (0xc000bea8f0) Data frame received for 3
I0415 13:58:33.476494 6 log.go:172] (0xc002801040) (3) Data frame handling
I0415 13:58:33.476509 6 log.go:172] (0xc002801040) (3) Data frame sent
I0415 13:58:33.476977 6 log.go:172] (0xc000bea8f0) Data frame received for 5
I0415 13:58:33.476996 6 log.go:172] (0xc001d16320) (5) Data frame handling
I0415 13:58:33.477059 6 log.go:172] (0xc000bea8f0) Data frame received for 3
I0415 13:58:33.477076 6 log.go:172] (0xc002801040) (3) Data frame handling
I0415 13:58:33.478675 6 log.go:172] (0xc000bea8f0) Data frame received for 1
I0415 13:58:33.478732 6 log.go:172] (0xc0015c6820) (1) Data frame handling
I0415 13:58:33.478767 6 log.go:172] (0xc0015c6820) (1) Data frame sent
I0415 13:58:33.478801 6 log.go:172] (0xc000bea8f0) (0xc0015c6820) Stream removed, broadcasting: 1
I0415 13:58:33.478888 6 log.go:172] (0xc000bea8f0) Go away received
I0415 13:58:33.478926 6 log.go:172] (0xc000bea8f0) (0xc0015c6820) Stream removed, broadcasting: 1
I0415 13:58:33.478945 6 log.go:172] (0xc000bea8f0) (0xc002801040) Stream removed, broadcasting: 3
I0415 13:58:33.478953 6 log.go:172] (0xc000bea8f0) (0xc001d16320) Stream removed, broadcasting: 5
Apr 15 13:58:33.478: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:58:33.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-50" for this suite.
Apr 15 13:58:55.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:58:55.613: INFO: namespace pod-network-test-50 deletion completed in 22.129957211s

• [SLOW TEST:42.527 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:58:55.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-2d15f435-1160-4205-b71f-229fb72e99de
STEP: Creating a pod to test consume secrets
Apr 15 13:58:55.697: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd" in namespace "projected-3205" to be "success or failure"
Apr 15 13:58:55.734: INFO: Pod "pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.899798ms
Apr 15 13:58:57.739: INFO: Pod "pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041264444s
Apr 15 13:58:59.743: INFO: Pod "pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045691184s
STEP: Saw pod success
Apr 15 13:58:59.743: INFO: Pod "pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd" satisfied condition "success or failure"
Apr 15 13:58:59.746: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd container projected-secret-volume-test: 
STEP: delete the pod
Apr 15 13:58:59.762: INFO: Waiting for pod pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd to disappear
Apr 15 13:58:59.792: INFO: Pod pod-projected-secrets-c536e7cf-b2c8-456b-99f7-f0eff1d4d6bd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:58:59.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3205" for this suite.
Apr 15 13:59:05.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:59:05.893: INFO: namespace projected-3205 deletion completed in 6.098278196s

• [SLOW TEST:10.279 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:59:05.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3697.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.126.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.126.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.126.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.126.178_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3697.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.126.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.126.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.126.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.126.178_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 13:59:12.063: INFO: Unable to read wheezy_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.066: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.070: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.073: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.110: INFO: Unable to read jessie_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.113: INFO: Unable to read jessie_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:12.137: INFO: Lookups using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a failed for: [wheezy_udp@dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_udp@dns-test-service.dns-3697.svc.cluster.local jessie_tcp@dns-test-service.dns-3697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local]
Apr 15 13:59:17.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.169: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.188: INFO: Unable to read jessie_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:17.213: INFO: Lookups using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a failed for: [wheezy_udp@dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_udp@dns-test-service.dns-3697.svc.cluster.local jessie_tcp@dns-test-service.dns-3697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local]
Apr 15 13:59:22.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.154: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.157: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.179: INFO: Unable to read jessie_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.185: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:22.206: INFO: Lookups using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a failed for: [wheezy_udp@dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_udp@dns-test-service.dns-3697.svc.cluster.local jessie_tcp@dns-test-service.dns-3697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local]
Apr 15 13:59:27.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.153: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.174: INFO: Unable to read jessie_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.177: INFO: Unable to read jessie_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.180: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.183: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:27.221: INFO: Lookups using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a failed for: [wheezy_udp@dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_udp@dns-test-service.dns-3697.svc.cluster.local jessie_tcp@dns-test-service.dns-3697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local]
Apr 15 13:59:32.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.179: INFO: Unable to read jessie_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.185: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:32.207: INFO: Lookups using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a failed for: [wheezy_udp@dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_udp@dns-test-service.dns-3697.svc.cluster.local jessie_tcp@dns-test-service.dns-3697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local]
Apr 15 13:59:37.146: INFO: Unable to read wheezy_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.149: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.151: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.153: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.170: INFO: Unable to read jessie_udp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.176: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.178: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local from pod dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a: the server could not find the requested resource (get pods dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a)
Apr 15 13:59:37.195: INFO: Lookups using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a failed for: [wheezy_udp@dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service.dns-3697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_udp@dns-test-service.dns-3697.svc.cluster.local jessie_tcp@dns-test-service.dns-3697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3697.svc.cluster.local]
Apr 15 13:59:42.203: INFO: DNS probes using dns-3697/dns-test-0c309e36-6b6f-40c1-b810-3c9d7353115a succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:59:42.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3697" for this suite.
Apr 15 13:59:48.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:59:48.797: INFO: namespace dns-3697 deletion completed in 6.261580158s

• [SLOW TEST:42.903 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:59:48.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 13:59:48.841: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 13:59:49.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8818" for this suite.
Apr 15 13:59:55.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 13:59:55.999: INFO: namespace custom-resource-definition-8818 deletion completed in 6.082373951s

• [SLOW TEST:7.202 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 13:59:56.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 13:59:56.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932" in namespace "projected-9846" to be "success or failure"
Apr 15 13:59:56.091: INFO: Pod "downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932": Phase="Pending", Reason="", readiness=false. Elapsed: 3.098135ms
Apr 15 13:59:58.095: INFO: Pod "downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007364082s
Apr 15 14:00:00.100: INFO: Pod "downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011620485s
STEP: Saw pod success
Apr 15 14:00:00.100: INFO: Pod "downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932" satisfied condition "success or failure"
Apr 15 14:00:00.103: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932 container client-container: 
STEP: delete the pod
Apr 15 14:00:00.123: INFO: Waiting for pod downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932 to disappear
Apr 15 14:00:00.148: INFO: Pod downwardapi-volume-47cfbb31-c0b6-423a-9f45-ef1fbdada932 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:00:00.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9846" for this suite.
Apr 15 14:00:06.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:00:06.253: INFO: namespace projected-9846 deletion completed in 6.101466544s • [SLOW TEST:10.253 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:00:06.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:00:12.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6059" for this suite.
Apr 15 14:00:18.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:00:18.611: INFO: namespace namespaces-6059 deletion completed in 6.138919559s
STEP: Destroying namespace "nsdeletetest-4840" for this suite.
Apr 15 14:00:18.613: INFO: Namespace nsdeletetest-4840 was already deleted
STEP: Destroying namespace "nsdeletetest-621" for this suite.
Apr 15 14:00:24.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:00:24.735: INFO: namespace nsdeletetest-621 deletion completed in 6.121553294s
• [SLOW TEST:18.481 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:00:24.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 14:00:24.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0" in namespace "downward-api-8383" to be "success or failure"
Apr 15 14:00:24.830: INFO: Pod "downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.413112ms
Apr 15 14:00:26.875: INFO: Pod "downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057022832s
Apr 15 14:00:28.880: INFO: Pod "downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061188063s
STEP: Saw pod success
Apr 15 14:00:28.880: INFO: Pod "downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0" satisfied condition "success or failure"
Apr 15 14:00:28.883: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0 container client-container:
STEP: delete the pod
Apr 15 14:00:28.902: INFO: Waiting for pod downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0 to disappear
Apr 15 14:00:28.906: INFO: Pod downwardapi-volume-c5672ef7-c930-43da-9c72-beb22d8642b0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:00:28.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8383" for this suite.
Apr 15 14:00:34.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:00:34.996: INFO: namespace downward-api-8383 deletion completed in 6.087106017s
• [SLOW TEST:10.261 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:00:34.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 15 14:00:35.052: INFO: Waiting up to 5m0s for pod "pod-6e4ef214-2393-4377-9f86-eeddf17a8210" in namespace "emptydir-9150" to be "success or failure"
Apr 15 14:00:35.062: INFO: Pod "pod-6e4ef214-2393-4377-9f86-eeddf17a8210": Phase="Pending", Reason="", readiness=false. Elapsed: 9.923274ms
Apr 15 14:00:37.066: INFO: Pod "pod-6e4ef214-2393-4377-9f86-eeddf17a8210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014096539s
Apr 15 14:00:39.071: INFO: Pod "pod-6e4ef214-2393-4377-9f86-eeddf17a8210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018391194s
STEP: Saw pod success
Apr 15 14:00:39.071: INFO: Pod "pod-6e4ef214-2393-4377-9f86-eeddf17a8210" satisfied condition "success or failure"
Apr 15 14:00:39.074: INFO: Trying to get logs from node iruya-worker2 pod pod-6e4ef214-2393-4377-9f86-eeddf17a8210 container test-container:
STEP: delete the pod
Apr 15 14:00:39.105: INFO: Waiting for pod pod-6e4ef214-2393-4377-9f86-eeddf17a8210 to disappear
Apr 15 14:00:39.116: INFO: Pod pod-6e4ef214-2393-4377-9f86-eeddf17a8210 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:00:39.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9150" for this suite.
Apr 15 14:00:45.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:00:45.248: INFO: namespace emptydir-9150 deletion completed in 6.12958358s
• [SLOW TEST:10.252 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:00:45.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 14:00:51.367: INFO: DNS probes using dns-6622/dns-test-7c51d792-608b-405f-8fbe-11bf3c01d258 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:00:51.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6622" for this suite.
Apr 15 14:00:57.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:00:57.548: INFO: namespace dns-6622 deletion completed in 6.114323323s
• [SLOW TEST:12.299 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:00:57.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-144069e2-8f5d-4819-aca7-e870f546e1b6 in namespace container-probe-7382
Apr 15 14:01:01.630: INFO: Started pod busybox-144069e2-8f5d-4819-aca7-e870f546e1b6 in namespace container-probe-7382
STEP: checking the pod's current state and verifying that restartCount is present
Apr 15 14:01:01.633: INFO: Initial restart count of pod busybox-144069e2-8f5d-4819-aca7-e870f546e1b6 is 0
Apr 15 14:01:53.750: INFO: Restart count of pod container-probe-7382/busybox-144069e2-8f5d-4819-aca7-e870f546e1b6 is now 1 (52.116456081s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:01:53.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7382" for this suite.
Apr 15 14:01:59.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:01:59.918: INFO: namespace container-probe-7382 deletion completed in 6.095109471s
• [SLOW TEST:62.369 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:01:59.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-3e3855ad-f6a3-4c6c-b9a7-f96a62187b35
STEP: Creating secret with name s-test-opt-upd-85aae5d5-6f6f-4c23-893d-49a32838faec
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3e3855ad-f6a3-4c6c-b9a7-f96a62187b35
STEP: Updating secret s-test-opt-upd-85aae5d5-6f6f-4c23-893d-49a32838faec
STEP: Creating secret with name s-test-opt-create-b930b8cc-6c59-46fe-90aa-675ceccd47c3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:03:18.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1987" for this suite.
Apr 15 14:03:40.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:03:40.579: INFO: namespace secrets-1987 deletion completed in 22.100075514s
• [SLOW TEST:100.660 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:03:40.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-333944d7-9ab8-4184-a738-16ef2e8abdb5
STEP: Creating a pod to test consume secrets
Apr 15 14:03:40.771: INFO: Waiting up to 5m0s for pod "pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105" in namespace "secrets-4300" to be "success or failure"
Apr 15 14:03:40.788: INFO: Pod "pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105": Phase="Pending", Reason="", readiness=false. Elapsed: 17.037762ms
Apr 15 14:03:42.830: INFO: Pod "pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058884365s
Apr 15 14:03:44.835: INFO: Pod "pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064296222s
STEP: Saw pod success
Apr 15 14:03:44.835: INFO: Pod "pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105" satisfied condition "success or failure"
Apr 15 14:03:44.853: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105 container secret-volume-test:
STEP: delete the pod
Apr 15 14:03:44.868: INFO: Waiting for pod pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105 to disappear
Apr 15 14:03:44.872: INFO: Pod pod-secrets-7a092ae8-5fab-42ba-8047-d82d39c6d105 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:03:44.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4300" for this suite.
Apr 15 14:03:50.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:03:50.996: INFO: namespace secrets-4300 deletion completed in 6.120318056s
STEP: Destroying namespace "secret-namespace-2795" for this suite.
Apr 15 14:03:57.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:03:57.129: INFO: namespace secret-namespace-2795 deletion completed in 6.133448273s
• [SLOW TEST:16.551 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:03:57.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cb8f2d4c-469e-4106-9c25-8f59edd0cfc5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-cb8f2d4c-469e-4106-9c25-8f59edd0cfc5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:05:29.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1798" for this suite.
Apr 15 14:05:51.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:05:51.850: INFO: namespace projected-1798 deletion completed in 22.150700773s
• [SLOW TEST:114.720 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:05:51.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 14:05:51.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e" in namespace "projected-9903" to be "success or failure"
Apr 15 14:05:51.913: INFO: Pod "downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.530099ms
Apr 15 14:05:53.918: INFO: Pod "downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021813341s
Apr 15 14:05:55.922: INFO: Pod "downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026238067s
STEP: Saw pod success
Apr 15 14:05:55.922: INFO: Pod "downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e" satisfied condition "success or failure"
Apr 15 14:05:55.926: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e container client-container:
STEP: delete the pod
Apr 15 14:05:55.977: INFO: Waiting for pod downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e to disappear
Apr 15 14:05:55.983: INFO: Pod downwardapi-volume-0802ae1c-294b-4a4b-9580-5566ff7d768e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:05:55.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9903" for this suite.
Apr 15 14:06:01.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:06:02.076: INFO: namespace projected-9903 deletion completed in 6.090302551s
• [SLOW TEST:10.226 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:06:02.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 15 14:06:06.156: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-cb0eeb03-abb8-4d28-a5cd-af9de8e2b765,GenerateName:,Namespace:events-3987,SelfLink:/api/v1/namespaces/events-3987/pods/send-events-cb0eeb03-abb8-4d28-a5cd-af9de8e2b765,UID:db541ec3-8b3d-4ede-b137-7753b2575f0c,ResourceVersion:5572321,Generation:0,CreationTimestamp:2020-04-15 14:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 124640195,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c2xmp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c2xmp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-c2xmp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fd0150} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fd0170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:06:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:06:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:06:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:06:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.128,StartTime:2020-04-15 14:06:02 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-15 14:06:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://8795dac688fe6bd9271fb82f40d095464b2b7c8fde3a08886ee5da2dbf42f454}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Apr 15 14:06:08.161: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 15 14:06:10.167: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:06:10.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3987" for this suite.
Apr 15 14:06:48.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:06:48.301: INFO: namespace events-3987 deletion completed in 38.104352557s
• [SLOW TEST:46.224 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:06:48.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 14:06:48.390: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 15 14:06:48.410: INFO: Number of nodes with available pods: 0
Apr 15 14:06:48.410: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 15 14:06:48.446: INFO: Number of nodes with available pods: 0
Apr 15 14:06:48.446: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:49.451: INFO: Number of nodes with available pods: 0
Apr 15 14:06:49.451: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:50.451: INFO: Number of nodes with available pods: 0
Apr 15 14:06:50.451: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:51.451: INFO: Number of nodes with available pods: 1
Apr 15 14:06:51.451: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 15 14:06:51.492: INFO: Number of nodes with available pods: 1
Apr 15 14:06:51.492: INFO: Number of running nodes: 0, number of available pods: 1
Apr 15 14:06:52.496: INFO: Number of nodes with available pods: 0
Apr 15 14:06:52.496: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 15 14:06:52.504: INFO: Number of nodes with available pods: 0
Apr 15 14:06:52.504: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:53.508: INFO: Number of nodes with available pods: 0
Apr 15 14:06:53.508: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:54.509: INFO: Number of nodes with available pods: 0
Apr 15 14:06:54.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:55.509: INFO: Number of nodes with available pods: 0
Apr 15 14:06:55.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:56.509: INFO: Number of nodes with available pods: 0
Apr 15 14:06:56.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:57.509: INFO: Number of nodes with available pods: 0
Apr 15 14:06:57.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:58.509: INFO: Number of nodes with available pods: 0
Apr 15 14:06:58.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:06:59.509: INFO: Number of nodes with available pods: 0
Apr 15 14:06:59.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:07:00.509: INFO: Number of nodes with available pods: 0
Apr 15 14:07:00.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:07:01.509: INFO: Number of nodes with available pods: 0
Apr 15 14:07:01.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:07:02.509: INFO: Number of nodes with available pods: 0
Apr 15 14:07:02.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:07:03.508: INFO: Number of nodes with available pods: 0
Apr 15 14:07:03.509: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:07:04.508: INFO: Number of nodes with available pods: 0
Apr 15 14:07:04.508: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:07:05.509: INFO: Number of nodes with available pods: 1
Apr 15 14:07:05.509: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1192, will wait for the garbage collector to delete the pods
Apr 15 14:07:05.574: INFO: Deleting DaemonSet.extensions daemon-set took: 6.882899ms
Apr 15 14:07:05.874: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.28105ms
Apr 15 14:07:12.177: INFO: Number of nodes with available pods: 0
Apr 15 14:07:12.177: INFO: Number of running nodes: 0, number of available pods: 0
Apr 15 14:07:12.179: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1192/daemonsets","resourceVersion":"5572519"},"items":null}
Apr 15 14:07:12.182: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1192/pods","resourceVersion":"5572519"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:07:12.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1192" for this suite.
Apr 15 14:07:18.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:07:18.347: INFO: namespace daemonsets-1192 deletion completed in 6.103584341s

• [SLOW TEST:30.046 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:07:18.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 15 14:07:18.423: INFO: Waiting up to 5m0s for pod "var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038" in namespace "var-expansion-6895" to be "success or failure"
Apr 15 14:07:18.427: INFO: Pod "var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038": Phase="Pending", Reason="", readiness=false. Elapsed: 3.569063ms
Apr 15 14:07:20.431: INFO: Pod "var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007760322s
Apr 15 14:07:22.435: INFO: Pod "var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011775022s
STEP: Saw pod success
Apr 15 14:07:22.435: INFO: Pod "var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038" satisfied condition "success or failure"
Apr 15 14:07:22.453: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038 container dapi-container: 
STEP: delete the pod
Apr 15 14:07:22.485: INFO: Waiting for pod var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038 to disappear
Apr 15 14:07:22.504: INFO: Pod var-expansion-7abd76db-747b-47f8-8fde-c2bc59d2d038 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:07:22.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6895" for this suite.
Apr 15 14:07:28.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:07:28.592: INFO: namespace var-expansion-6895 deletion completed in 6.084968877s

• [SLOW TEST:10.245 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:07:28.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 14:07:28.633: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9" in namespace "downward-api-2576" to be "success or failure"
Apr 15 14:07:28.652: INFO: Pod "downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.444646ms
Apr 15 14:07:30.657: INFO: Pod "downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024467173s
Apr 15 14:07:32.661: INFO: Pod "downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028770738s
STEP: Saw pod success
Apr 15 14:07:32.661: INFO: Pod "downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9" satisfied condition "success or failure"
Apr 15 14:07:32.665: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9 container client-container: 
STEP: delete the pod
Apr 15 14:07:32.696: INFO: Waiting for pod downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9 to disappear
Apr 15 14:07:32.704: INFO: Pod downwardapi-volume-a080c7f1-7443-460b-b320-db4611d371d9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:07:32.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2576" for this suite.
Apr 15 14:07:38.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:07:38.792: INFO: namespace downward-api-2576 deletion completed in 6.085170962s

• [SLOW TEST:10.200 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:07:38.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-b7378aca-c11f-44a2-903c-111de08b7442
STEP: Creating a pod to test consume configMaps
Apr 15 14:07:38.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781" in namespace "configmap-3611" to be "success or failure"
Apr 15 14:07:38.891: INFO: Pod "pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781": Phase="Pending", Reason="", readiness=false. Elapsed: 3.444546ms
Apr 15 14:07:40.895: INFO: Pod "pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007372234s
Apr 15 14:07:42.899: INFO: Pod "pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011345247s
STEP: Saw pod success
Apr 15 14:07:42.899: INFO: Pod "pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781" satisfied condition "success or failure"
Apr 15 14:07:42.902: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781 container configmap-volume-test: 
STEP: delete the pod
Apr 15 14:07:42.917: INFO: Waiting for pod pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781 to disappear
Apr 15 14:07:42.934: INFO: Pod pod-configmaps-b5142f3a-c77e-471f-a761-ba2b0dc35781 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:07:42.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3611" for this suite.
Apr 15 14:07:48.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:07:49.017: INFO: namespace configmap-3611 deletion completed in 6.079616223s

• [SLOW TEST:10.224 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:07:49.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-d34942e7-c8c6-44ea-9206-b834b0aad96f
STEP: Creating secret with name secret-projected-all-test-volume-3adaeaf9-e7e0-48c5-9b89-2b4addfe01fa
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 15 14:07:49.127: INFO: Waiting up to 5m0s for pod "projected-volume-696a5961-2525-4420-8525-4461cbffd7f4" in namespace "projected-2305" to be "success or failure"
Apr 15 14:07:49.144: INFO: Pod "projected-volume-696a5961-2525-4420-8525-4461cbffd7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.069957ms
Apr 15 14:07:51.147: INFO: Pod "projected-volume-696a5961-2525-4420-8525-4461cbffd7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020454491s
Apr 15 14:07:53.151: INFO: Pod "projected-volume-696a5961-2525-4420-8525-4461cbffd7f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024775579s
STEP: Saw pod success
Apr 15 14:07:53.151: INFO: Pod "projected-volume-696a5961-2525-4420-8525-4461cbffd7f4" satisfied condition "success or failure"
Apr 15 14:07:53.154: INFO: Trying to get logs from node iruya-worker pod projected-volume-696a5961-2525-4420-8525-4461cbffd7f4 container projected-all-volume-test: 
STEP: delete the pod
Apr 15 14:07:53.250: INFO: Waiting for pod projected-volume-696a5961-2525-4420-8525-4461cbffd7f4 to disappear
Apr 15 14:07:53.253: INFO: Pod projected-volume-696a5961-2525-4420-8525-4461cbffd7f4 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:07:53.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2305" for this suite.
Apr 15 14:07:59.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:07:59.333: INFO: namespace projected-2305 deletion completed in 6.077480185s

• [SLOW TEST:10.316 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:07:59.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-835bc2bd-468e-4444-99ac-2f90f2608f70
STEP: Creating a pod to test consume secrets
Apr 15 14:07:59.400: INFO: Waiting up to 5m0s for pod "pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a" in namespace "secrets-9148" to be "success or failure"
Apr 15 14:07:59.403: INFO: Pod "pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.607622ms
Apr 15 14:08:01.407: INFO: Pod "pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007552834s
Apr 15 14:08:03.412: INFO: Pod "pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012326616s
STEP: Saw pod success
Apr 15 14:08:03.412: INFO: Pod "pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a" satisfied condition "success or failure"
Apr 15 14:08:03.415: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a container secret-volume-test: 
STEP: delete the pod
Apr 15 14:08:03.429: INFO: Waiting for pod pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a to disappear
Apr 15 14:08:03.433: INFO: Pod pod-secrets-efc8d216-bec0-469a-841c-03e2d6ba0c2a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:08:03.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9148" for this suite.
Apr 15 14:08:09.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:08:09.526: INFO: namespace secrets-9148 deletion completed in 6.089333059s

• [SLOW TEST:10.192 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:08:09.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:08:35.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7778" for this suite.
Apr 15 14:08:41.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:08:41.843: INFO: namespace namespaces-7778 deletion completed in 6.1026482s
STEP: Destroying namespace "nsdeletetest-2932" for this suite.
Apr 15 14:08:41.846: INFO: Namespace nsdeletetest-2932 was already deleted
STEP: Destroying namespace "nsdeletetest-5149" for this suite.
Apr 15 14:08:47.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:08:47.954: INFO: namespace nsdeletetest-5149 deletion completed in 6.108685315s

• [SLOW TEST:38.427 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:08:47.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-76dac65e-fefb-44a1-b8bd-9d0f94282720
STEP: Creating a pod to test consume configMaps
Apr 15 14:08:48.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80" in namespace "configmap-57" to be "success or failure"
Apr 15 14:08:48.053: INFO: Pod "pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662133ms
Apr 15 14:08:50.097: INFO: Pod "pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047623453s
Apr 15 14:08:52.101: INFO: Pod "pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051512173s
STEP: Saw pod success
Apr 15 14:08:52.101: INFO: Pod "pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80" satisfied condition "success or failure"
Apr 15 14:08:52.104: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80 container configmap-volume-test: 
STEP: delete the pod
Apr 15 14:08:52.126: INFO: Waiting for pod pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80 to disappear
Apr 15 14:08:52.129: INFO: Pod pod-configmaps-2c4c9e20-d504-4572-9e77-7651d60f2a80 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:08:52.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-57" for this suite.
Apr 15 14:08:58.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:08:58.221: INFO: namespace configmap-57 deletion completed in 6.089586182s

• [SLOW TEST:10.267 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:08:58.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-fab3a24e-2627-49da-9643-d31a1932b9d5
STEP: Creating a pod to test consume configMaps
Apr 15 14:08:58.289: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8" in namespace "projected-8743" to be "success or failure"
Apr 15 14:08:58.312: INFO: Pod "pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.321781ms
Apr 15 14:09:00.316: INFO: Pod "pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027281632s
Apr 15 14:09:02.321: INFO: Pod "pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031634632s
STEP: Saw pod success
Apr 15 14:09:02.321: INFO: Pod "pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8" satisfied condition "success or failure"
Apr 15 14:09:02.323: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 15 14:09:02.344: INFO: Waiting for pod pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8 to disappear
Apr 15 14:09:02.358: INFO: Pod pod-projected-configmaps-f9541b72-107a-4779-8afe-e2a0b5a7ffe8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:09:02.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8743" for this suite.
Apr 15 14:09:08.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:09:08.458: INFO: namespace projected-8743 deletion completed in 6.096927085s

• [SLOW TEST:10.236 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:09:08.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6041
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6041
STEP: Deleting pre-stop pod
Apr 15 14:09:21.596: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:09:21.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6041" for this suite.
Apr 15 14:09:59.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:09:59.720: INFO: namespace prestop-6041 deletion completed in 38.107144575s

• [SLOW TEST:51.262 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:09:59.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 15 14:09:59.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 14:09:59.806: INFO: Number of nodes with available pods: 0
Apr 15 14:09:59.806: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:10:00.811: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 14:10:00.815: INFO: Number of nodes with available pods: 0
Apr 15 14:10:00.815: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:10:01.872: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 14:10:01.875: INFO: Number of nodes with available pods: 0
Apr 15 14:10:01.875: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:10:02.890: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 14:10:02.913: INFO: Number of nodes with available pods: 0
Apr 15 14:10:02.913: INFO: Node iruya-worker is running more than one daemon pod
Apr 15 14:10:03.812: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 14:10:03.815: INFO: Number of nodes with available pods: 2
Apr 15 14:10:03.815: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 15 14:10:03.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 14:10:03.877: INFO: Number of nodes with available pods: 2
Apr 15 14:10:03.877: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9855, will wait for the garbage collector to delete the pods
Apr 15 14:10:05.076: INFO: Deleting DaemonSet.extensions daemon-set took: 35.361784ms
Apr 15 14:10:05.377: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.297801ms
Apr 15 14:10:08.186: INFO: Number of nodes with available pods: 0
Apr 15 14:10:08.186: INFO: Number of running nodes: 0, number of available pods: 0
Apr 15 14:10:08.189: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9855/daemonsets","resourceVersion":"5573197"},"items":null}
Apr 15 14:10:08.191: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9855/pods","resourceVersion":"5573197"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:10:08.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9855" for this suite.
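The repeated "Number of nodes with available pods" entries above come from a poll loop: the test counts schedulable nodes (skipping nodes whose taints the DaemonSet does not tolerate, such as the control-plane's `node-role.kubernetes.io/master:NoSchedule`) and retries until every eligible node has an available daemon pod. A minimal Python sketch of that logic, with illustrative names and data shapes (this is not the e2e suite's actual Go helper):

```python
# Sketch of the polling pattern visible in the DaemonSet log above.
# Node/pod shapes and function names are hypothetical illustrations.

def nodes_with_available_pods(nodes, pods_by_node, tolerated_taints=()):
    """Return (eligible, available): nodes the DaemonSet should cover
    vs. nodes that currently have a ready daemon pod."""
    eligible = available = 0
    for node in nodes:
        # Skip nodes with taints the DaemonSet does not tolerate
        # (e.g. node-role.kubernetes.io/master:NoSchedule in the log).
        if any(t not in tolerated_taints for t in node.get("taints", [])):
            continue
        eligible += 1
        if any(p["ready"] for p in pods_by_node.get(node["name"], [])):
            available += 1
    return eligible, available

def wait_for_daemon_pods(nodes, poll_status, max_polls=60):
    """Poll until every eligible node has an available daemon pod."""
    for attempt in range(max_polls):
        pods_by_node = poll_status(attempt)
        eligible, available = nodes_with_available_pods(nodes, pods_by_node)
        if eligible > 0 and available == eligible:
            return attempt
    raise TimeoutError("daemon pods never became available on all nodes")

# Demo mirroring the log's cluster: two workers plus a tainted control plane.
demo_nodes = [
    {"name": "iruya-worker"},
    {"name": "iruya-worker2"},
    {"name": "iruya-control-plane",
     "taints": ["node-role.kubernetes.io/master:NoSchedule"]},
]

def demo_poll(attempt):
    # Pods become ready on the fourth poll, as in the ~4s wait above.
    if attempt < 3:
        return {}
    return {"iruya-worker": [{"ready": True}],
            "iruya-worker2": [{"ready": True}]}
```

Run against the demo data, `wait_for_daemon_pods(demo_nodes, demo_poll)` succeeds on the fourth poll, matching the log's progression from 0 to 2 available pods while the control-plane node is skipped.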
Apr 15 14:10:14.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:10:14.356: INFO: namespace daemonsets-9855 deletion completed in 6.095892915s
• [SLOW TEST:14.635 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:10:14.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 15 14:10:14.420: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 15 14:10:14.433: INFO: Waiting for terminating namespaces to be deleted...
Apr 15 14:10:14.435: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 15 14:10:14.439: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 15 14:10:14.439: INFO: Container kube-proxy ready: true, restart count 0
Apr 15 14:10:14.439: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 15 14:10:14.439: INFO: Container kindnet-cni ready: true, restart count 0
Apr 15 14:10:14.439: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 15 14:10:14.445: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 15 14:10:14.445: INFO: Container coredns ready: true, restart count 0
Apr 15 14:10:14.445: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 15 14:10:14.445: INFO: Container coredns ready: true, restart count 0
Apr 15 14:10:14.445: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 15 14:10:14.445: INFO: Container kube-proxy ready: true, restart count 0
Apr 15 14:10:14.445: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 15 14:10:14.445: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 15 14:10:14.502: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 15 14:10:14.502: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 15 14:10:14.502: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 15 14:10:14.502: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 15 14:10:14.502: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 15 14:10:14.502: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40.1606038b12b00f78], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8113/filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40.1606038b97076584], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40.1606038bcf13973b], Reason = [Created], Message = [Created container filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40.1606038bde9671b0], Reason = [Started], Message = [Started container filler-pod-1d4a3852-02c5-4f13-bbfd-3ec243383a40]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc.1606038b1203a5be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8113/filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc.1606038b601e247e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc.1606038bae22500e], Reason = [Created], Message = [Created container filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc.1606038bc2ab5eaa], Reason = [Started], Message = [Started container filler-pod-c699bf89-b73e-4faa-8b93-53465e104fdc]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1606038c01e5ee69], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:10:19.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8113" for this suite.
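The `FailedScheduling` event above ("2 Insufficient cpu") reflects the scheduler's resource fit check: a pod fits a node only if the sum of the CPU requests already on the node plus the pod's own request stays within the node's allocatable CPU. A minimal sketch of that arithmetic in Python, using millicore quantities as in the log ("100m" = 100 millicores); the helper names are illustrative, not the scheduler's actual API:

```python
# Illustrative sketch of the CPU fit check behind "Insufficient cpu".

def parse_millicores(cpu):
    """Parse "100m"- or "1"-style CPU quantities into millicores."""
    if cpu.endswith("m"):
        return int(cpu[:-1])
    return int(float(cpu) * 1000)

def pod_fits_cpu(node_allocatable_m, existing_requests_m, pod_request_m):
    """True if the pod's request fits alongside existing requests."""
    return sum(existing_requests_m) + pod_request_m <= node_allocatable_m
```

In the test, filler pods are sized to consume most of each node's remaining CPU, so the "additional-pod" request cannot fit on either worker, and the tainted control-plane node is excluded outright, yielding "0/3 nodes are available".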
Apr 15 14:10:25.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:10:25.736: INFO: namespace sched-pred-8113 deletion completed in 6.111249142s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:11.380 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:10:25.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2119
I0415 14:10:25.820758 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2119, replica count: 1
I0415 14:10:26.871235 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0415 14:10:27.871471 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0415 14:10:28.871664 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0415 14:10:29.871940 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 15 14:10:29.999: INFO: Created: latency-svc-66jvl Apr 15 14:10:30.016: INFO: Got endpoints: latency-svc-66jvl [43.905932ms] Apr 15 14:10:30.087: INFO: Created: latency-svc-4tj7x Apr 15 14:10:30.094: INFO: Got endpoints: latency-svc-4tj7x [78.180434ms] Apr 15 14:10:30.122: INFO: Created: latency-svc-gvxc5 Apr 15 14:10:30.136: INFO: Got endpoints: latency-svc-gvxc5 [119.991445ms] Apr 15 14:10:30.155: INFO: Created: latency-svc-vgszw Apr 15 14:10:30.166: INFO: Got endpoints: latency-svc-vgszw [150.383686ms] Apr 15 14:10:30.185: INFO: Created: latency-svc-7bws4 Apr 15 14:10:30.248: INFO: Got endpoints: latency-svc-7bws4 [232.188775ms] Apr 15 14:10:30.287: INFO: Created: latency-svc-k7sd2 Apr 15 14:10:30.304: INFO: Got endpoints: latency-svc-k7sd2 [288.359841ms] Apr 15 14:10:30.336: INFO: Created: latency-svc-x8g9d Apr 15 14:10:30.346: INFO: Got endpoints: latency-svc-x8g9d [330.441507ms] Apr 15 14:10:30.405: INFO: Created: latency-svc-z2k9n Apr 15 14:10:30.412: INFO: Got endpoints: latency-svc-z2k9n [396.264213ms] Apr 15 14:10:30.447: INFO: Created: latency-svc-rlgxq Apr 15 14:10:30.468: INFO: Got endpoints: latency-svc-rlgxq [451.489159ms] Apr 15 14:10:30.491: INFO: Created: latency-svc-gcbw6 Apr 15 14:10:30.504: INFO: Got endpoints: latency-svc-gcbw6 [487.448892ms] Apr 15 14:10:30.560: INFO: Created: latency-svc-wm4ws Apr 15 14:10:30.563: INFO: Got endpoints: latency-svc-wm4ws [547.171065ms] Apr 15 14:10:30.593: INFO: Created: latency-svc-7c94t Apr 15 14:10:30.612: INFO: Got endpoints: latency-svc-7c94t [595.973078ms] Apr 15 14:10:30.633: INFO: Created: 
latency-svc-rdbbj Apr 15 14:10:30.642: INFO: Got endpoints: latency-svc-rdbbj [626.078876ms] Apr 15 14:10:30.699: INFO: Created: latency-svc-h5kdv Apr 15 14:10:30.725: INFO: Got endpoints: latency-svc-h5kdv [708.344723ms] Apr 15 14:10:30.758: INFO: Created: latency-svc-zgqf4 Apr 15 14:10:30.769: INFO: Got endpoints: latency-svc-zgqf4 [752.660035ms] Apr 15 14:10:30.785: INFO: Created: latency-svc-vgz7b Apr 15 14:10:30.854: INFO: Got endpoints: latency-svc-vgz7b [837.991406ms] Apr 15 14:10:30.857: INFO: Created: latency-svc-zsmtc Apr 15 14:10:30.865: INFO: Got endpoints: latency-svc-zsmtc [771.346918ms] Apr 15 14:10:30.891: INFO: Created: latency-svc-5m7t2 Apr 15 14:10:30.902: INFO: Got endpoints: latency-svc-5m7t2 [766.230858ms] Apr 15 14:10:30.929: INFO: Created: latency-svc-n6rdd Apr 15 14:10:30.944: INFO: Got endpoints: latency-svc-n6rdd [777.880181ms] Apr 15 14:10:31.009: INFO: Created: latency-svc-mwskh Apr 15 14:10:31.016: INFO: Got endpoints: latency-svc-mwskh [767.880058ms] Apr 15 14:10:31.041: INFO: Created: latency-svc-7zf44 Apr 15 14:10:31.053: INFO: Got endpoints: latency-svc-7zf44 [748.164139ms] Apr 15 14:10:31.071: INFO: Created: latency-svc-ttpn4 Apr 15 14:10:31.083: INFO: Got endpoints: latency-svc-ttpn4 [736.402881ms] Apr 15 14:10:31.100: INFO: Created: latency-svc-jsgh9 Apr 15 14:10:31.140: INFO: Got endpoints: latency-svc-jsgh9 [728.097735ms] Apr 15 14:10:31.157: INFO: Created: latency-svc-9jqzd Apr 15 14:10:31.168: INFO: Got endpoints: latency-svc-9jqzd [700.022134ms] Apr 15 14:10:31.186: INFO: Created: latency-svc-fhsfl Apr 15 14:10:31.198: INFO: Got endpoints: latency-svc-fhsfl [694.22236ms] Apr 15 14:10:31.218: INFO: Created: latency-svc-qgcxq Apr 15 14:10:31.228: INFO: Got endpoints: latency-svc-qgcxq [664.913291ms] Apr 15 14:10:31.281: INFO: Created: latency-svc-f2xhf Apr 15 14:10:31.301: INFO: Got endpoints: latency-svc-f2xhf [688.537161ms] Apr 15 14:10:31.329: INFO: Created: latency-svc-vzjnv Apr 15 14:10:31.343: INFO: Got endpoints: 
latency-svc-vzjnv [701.149989ms] Apr 15 14:10:31.366: INFO: Created: latency-svc-qfcv4 Apr 15 14:10:31.422: INFO: Got endpoints: latency-svc-qfcv4 [697.536579ms] Apr 15 14:10:31.443: INFO: Created: latency-svc-bzz2p Apr 15 14:10:31.455: INFO: Got endpoints: latency-svc-bzz2p [685.596708ms] Apr 15 14:10:31.484: INFO: Created: latency-svc-8pl5z Apr 15 14:10:31.497: INFO: Got endpoints: latency-svc-8pl5z [643.054284ms] Apr 15 14:10:31.520: INFO: Created: latency-svc-rn5z5 Apr 15 14:10:31.554: INFO: Got endpoints: latency-svc-rn5z5 [688.563614ms] Apr 15 14:10:31.576: INFO: Created: latency-svc-gxll4 Apr 15 14:10:31.587: INFO: Got endpoints: latency-svc-gxll4 [685.205873ms] Apr 15 14:10:31.607: INFO: Created: latency-svc-f9k2z Apr 15 14:10:31.618: INFO: Got endpoints: latency-svc-f9k2z [673.567755ms] Apr 15 14:10:31.641: INFO: Created: latency-svc-4lrzl Apr 15 14:10:31.704: INFO: Got endpoints: latency-svc-4lrzl [687.561411ms] Apr 15 14:10:31.706: INFO: Created: latency-svc-n6cl6 Apr 15 14:10:31.714: INFO: Got endpoints: latency-svc-n6cl6 [661.684266ms] Apr 15 14:10:31.737: INFO: Created: latency-svc-t4p8t Apr 15 14:10:31.751: INFO: Got endpoints: latency-svc-t4p8t [667.606352ms] Apr 15 14:10:31.768: INFO: Created: latency-svc-stsxb Apr 15 14:10:31.781: INFO: Got endpoints: latency-svc-stsxb [640.559912ms] Apr 15 14:10:31.836: INFO: Created: latency-svc-4wrzq Apr 15 14:10:31.839: INFO: Got endpoints: latency-svc-4wrzq [670.9205ms] Apr 15 14:10:31.874: INFO: Created: latency-svc-xf2dw Apr 15 14:10:31.890: INFO: Got endpoints: latency-svc-xf2dw [692.474062ms] Apr 15 14:10:31.910: INFO: Created: latency-svc-c5jlg Apr 15 14:10:31.926: INFO: Got endpoints: latency-svc-c5jlg [697.55388ms] Apr 15 14:10:31.979: INFO: Created: latency-svc-6jcbt Apr 15 14:10:31.996: INFO: Got endpoints: latency-svc-6jcbt [695.34262ms] Apr 15 14:10:32.045: INFO: Created: latency-svc-j2gld Apr 15 14:10:32.058: INFO: Got endpoints: latency-svc-j2gld [715.064821ms] Apr 15 14:10:32.079: INFO: Created: 
latency-svc-7z54w Apr 15 14:10:32.158: INFO: Got endpoints: latency-svc-7z54w [736.157019ms] Apr 15 14:10:32.161: INFO: Created: latency-svc-krhnr Apr 15 14:10:32.167: INFO: Got endpoints: latency-svc-krhnr [712.504251ms] Apr 15 14:10:32.200: INFO: Created: latency-svc-llrpc Apr 15 14:10:32.216: INFO: Got endpoints: latency-svc-llrpc [718.174833ms] Apr 15 14:10:32.238: INFO: Created: latency-svc-x7q5n Apr 15 14:10:32.246: INFO: Got endpoints: latency-svc-x7q5n [691.566275ms] Apr 15 14:10:32.339: INFO: Created: latency-svc-6b4vh Apr 15 14:10:32.343: INFO: Got endpoints: latency-svc-6b4vh [755.835575ms] Apr 15 14:10:32.410: INFO: Created: latency-svc-24mpv Apr 15 14:10:32.427: INFO: Got endpoints: latency-svc-24mpv [808.927579ms] Apr 15 14:10:32.477: INFO: Created: latency-svc-db6bx Apr 15 14:10:32.504: INFO: Got endpoints: latency-svc-db6bx [800.028112ms] Apr 15 14:10:32.534: INFO: Created: latency-svc-n5jmp Apr 15 14:10:32.553: INFO: Got endpoints: latency-svc-n5jmp [838.830545ms] Apr 15 14:10:32.608: INFO: Created: latency-svc-2df4v Apr 15 14:10:32.611: INFO: Got endpoints: latency-svc-2df4v [860.617347ms] Apr 15 14:10:32.633: INFO: Created: latency-svc-m4st7 Apr 15 14:10:32.644: INFO: Got endpoints: latency-svc-m4st7 [862.370472ms] Apr 15 14:10:32.663: INFO: Created: latency-svc-8cfxf Apr 15 14:10:32.674: INFO: Got endpoints: latency-svc-8cfxf [835.611039ms] Apr 15 14:10:32.696: INFO: Created: latency-svc-4nz24 Apr 15 14:10:32.751: INFO: Got endpoints: latency-svc-4nz24 [860.907965ms] Apr 15 14:10:32.754: INFO: Created: latency-svc-9dstx Apr 15 14:10:32.759: INFO: Got endpoints: latency-svc-9dstx [832.55041ms] Apr 15 14:10:32.790: INFO: Created: latency-svc-qc4sq Apr 15 14:10:32.801: INFO: Got endpoints: latency-svc-qc4sq [804.935057ms] Apr 15 14:10:32.825: INFO: Created: latency-svc-q8pdr Apr 15 14:10:32.848: INFO: Got endpoints: latency-svc-q8pdr [789.745571ms] Apr 15 14:10:32.907: INFO: Created: latency-svc-pcwmm Apr 15 14:10:32.911: INFO: Got endpoints: 
latency-svc-pcwmm [752.128427ms] Apr 15 14:10:32.936: INFO: Created: latency-svc-xwt2j Apr 15 14:10:32.952: INFO: Got endpoints: latency-svc-xwt2j [784.39966ms] Apr 15 14:10:32.986: INFO: Created: latency-svc-rl4t8 Apr 15 14:10:33.000: INFO: Got endpoints: latency-svc-rl4t8 [784.136184ms] Apr 15 14:10:33.075: INFO: Created: latency-svc-jcshv Apr 15 14:10:33.078: INFO: Got endpoints: latency-svc-jcshv [832.260064ms] Apr 15 14:10:33.104: INFO: Created: latency-svc-65q7t Apr 15 14:10:33.120: INFO: Got endpoints: latency-svc-65q7t [776.858631ms] Apr 15 14:10:33.140: INFO: Created: latency-svc-sw4k6 Apr 15 14:10:33.151: INFO: Got endpoints: latency-svc-sw4k6 [724.273677ms] Apr 15 14:10:33.172: INFO: Created: latency-svc-ctqd2 Apr 15 14:10:33.207: INFO: Got endpoints: latency-svc-ctqd2 [703.016026ms] Apr 15 14:10:33.220: INFO: Created: latency-svc-gmgrm Apr 15 14:10:33.239: INFO: Got endpoints: latency-svc-gmgrm [685.525457ms] Apr 15 14:10:33.272: INFO: Created: latency-svc-ffxgh Apr 15 14:10:33.302: INFO: Got endpoints: latency-svc-ffxgh [691.06155ms] Apr 15 14:10:33.345: INFO: Created: latency-svc-5t9dd Apr 15 14:10:33.348: INFO: Got endpoints: latency-svc-5t9dd [704.423268ms] Apr 15 14:10:33.370: INFO: Created: latency-svc-vdqrf Apr 15 14:10:33.387: INFO: Got endpoints: latency-svc-vdqrf [712.146838ms] Apr 15 14:10:33.413: INFO: Created: latency-svc-h56lx Apr 15 14:10:33.430: INFO: Got endpoints: latency-svc-h56lx [679.108411ms] Apr 15 14:10:33.495: INFO: Created: latency-svc-xv2qw Apr 15 14:10:33.499: INFO: Got endpoints: latency-svc-xv2qw [740.600488ms] Apr 15 14:10:33.524: INFO: Created: latency-svc-4p25h Apr 15 14:10:33.537: INFO: Got endpoints: latency-svc-4p25h [736.113848ms] Apr 15 14:10:33.556: INFO: Created: latency-svc-5lmtg Apr 15 14:10:33.567: INFO: Got endpoints: latency-svc-5lmtg [719.147504ms] Apr 15 14:10:33.587: INFO: Created: latency-svc-9brzr Apr 15 14:10:33.626: INFO: Got endpoints: latency-svc-9brzr [715.103022ms] Apr 15 14:10:33.640: INFO: 
Created: latency-svc-hpthm Apr 15 14:10:33.652: INFO: Got endpoints: latency-svc-hpthm [700.496229ms] Apr 15 14:10:33.674: INFO: Created: latency-svc-w294w Apr 15 14:10:33.689: INFO: Got endpoints: latency-svc-w294w [689.145911ms] Apr 15 14:10:33.711: INFO: Created: latency-svc-7qpk8 Apr 15 14:10:33.725: INFO: Got endpoints: latency-svc-7qpk8 [647.455043ms] Apr 15 14:10:33.770: INFO: Created: latency-svc-55fn6 Apr 15 14:10:33.773: INFO: Got endpoints: latency-svc-55fn6 [652.967759ms] Apr 15 14:10:33.821: INFO: Created: latency-svc-kfqnd Apr 15 14:10:33.846: INFO: Got endpoints: latency-svc-kfqnd [694.38073ms] Apr 15 14:10:33.913: INFO: Created: latency-svc-pxnvf Apr 15 14:10:33.915: INFO: Got endpoints: latency-svc-pxnvf [708.31136ms] Apr 15 14:10:33.938: INFO: Created: latency-svc-6vmdr Apr 15 14:10:33.954: INFO: Got endpoints: latency-svc-6vmdr [715.470764ms] Apr 15 14:10:33.977: INFO: Created: latency-svc-smzkk Apr 15 14:10:33.985: INFO: Got endpoints: latency-svc-smzkk [682.801438ms] Apr 15 14:10:34.007: INFO: Created: latency-svc-l6swr Apr 15 14:10:34.051: INFO: Got endpoints: latency-svc-l6swr [702.478738ms] Apr 15 14:10:34.070: INFO: Created: latency-svc-7dcq8 Apr 15 14:10:34.082: INFO: Got endpoints: latency-svc-7dcq8 [695.352629ms] Apr 15 14:10:34.102: INFO: Created: latency-svc-dvbx7 Apr 15 14:10:34.111: INFO: Got endpoints: latency-svc-dvbx7 [680.889295ms] Apr 15 14:10:34.131: INFO: Created: latency-svc-mnb7p Apr 15 14:10:34.142: INFO: Got endpoints: latency-svc-mnb7p [642.380324ms] Apr 15 14:10:34.183: INFO: Created: latency-svc-ds6pd Apr 15 14:10:34.196: INFO: Got endpoints: latency-svc-ds6pd [658.455058ms] Apr 15 14:10:34.217: INFO: Created: latency-svc-v5jt9 Apr 15 14:10:34.232: INFO: Got endpoints: latency-svc-v5jt9 [664.664975ms] Apr 15 14:10:34.265: INFO: Created: latency-svc-gdcqx Apr 15 14:10:34.275: INFO: Got endpoints: latency-svc-gdcqx [648.933823ms] Apr 15 14:10:34.332: INFO: Created: latency-svc-hmjgm Apr 15 14:10:34.336: INFO: Got 
endpoints: latency-svc-hmjgm [683.680435ms] Apr 15 14:10:34.370: INFO: Created: latency-svc-shx9r Apr 15 14:10:34.389: INFO: Got endpoints: latency-svc-shx9r [700.37788ms] Apr 15 14:10:34.422: INFO: Created: latency-svc-6jllc Apr 15 14:10:34.512: INFO: Got endpoints: latency-svc-6jllc [786.177206ms] Apr 15 14:10:34.514: INFO: Created: latency-svc-7f85c Apr 15 14:10:34.522: INFO: Got endpoints: latency-svc-7f85c [748.35101ms] Apr 15 14:10:34.568: INFO: Created: latency-svc-glpv9 Apr 15 14:10:34.582: INFO: Got endpoints: latency-svc-glpv9 [736.579301ms] Apr 15 14:10:34.600: INFO: Created: latency-svc-l7tbj Apr 15 14:10:34.638: INFO: Got endpoints: latency-svc-l7tbj [722.439156ms] Apr 15 14:10:34.648: INFO: Created: latency-svc-t64t2 Apr 15 14:10:34.663: INFO: Got endpoints: latency-svc-t64t2 [708.523407ms] Apr 15 14:10:34.694: INFO: Created: latency-svc-54tqr Apr 15 14:10:34.706: INFO: Got endpoints: latency-svc-54tqr [720.472797ms] Apr 15 14:10:34.725: INFO: Created: latency-svc-mhlmh Apr 15 14:10:34.735: INFO: Got endpoints: latency-svc-mhlmh [684.883762ms] Apr 15 14:10:34.787: INFO: Created: latency-svc-g75cg Apr 15 14:10:34.796: INFO: Got endpoints: latency-svc-g75cg [714.083686ms] Apr 15 14:10:34.817: INFO: Created: latency-svc-l8jhg Apr 15 14:10:34.846: INFO: Got endpoints: latency-svc-l8jhg [734.964223ms] Apr 15 14:10:34.880: INFO: Created: latency-svc-bgsl8 Apr 15 14:10:34.943: INFO: Got endpoints: latency-svc-bgsl8 [801.075049ms] Apr 15 14:10:34.958: INFO: Created: latency-svc-zldhq Apr 15 14:10:34.971: INFO: Got endpoints: latency-svc-zldhq [774.817508ms] Apr 15 14:10:35.021: INFO: Created: latency-svc-dkdwn Apr 15 14:10:35.031: INFO: Got endpoints: latency-svc-dkdwn [799.189787ms] Apr 15 14:10:35.087: INFO: Created: latency-svc-c7l57 Apr 15 14:10:35.099: INFO: Got endpoints: latency-svc-c7l57 [824.049898ms] Apr 15 14:10:35.120: INFO: Created: latency-svc-vlzwf Apr 15 14:10:35.134: INFO: Got endpoints: latency-svc-vlzwf [798.561866ms] Apr 15 14:10:35.150: 
INFO: Created: latency-svc-mrhw2 Apr 15 14:10:35.158: INFO: Got endpoints: latency-svc-mrhw2 [768.915824ms] Apr 15 14:10:35.174: INFO: Created: latency-svc-kfmcb Apr 15 14:10:35.182: INFO: Got endpoints: latency-svc-kfmcb [670.749554ms] Apr 15 14:10:35.230: INFO: Created: latency-svc-6k8ms Apr 15 14:10:35.249: INFO: Got endpoints: latency-svc-6k8ms [726.85452ms] Apr 15 14:10:35.283: INFO: Created: latency-svc-85r7n Apr 15 14:10:35.297: INFO: Got endpoints: latency-svc-85r7n [714.98124ms] Apr 15 14:10:35.331: INFO: Created: latency-svc-pj7s6 Apr 15 14:10:35.386: INFO: Got endpoints: latency-svc-pj7s6 [747.907618ms] Apr 15 14:10:35.399: INFO: Created: latency-svc-mxr7h Apr 15 14:10:35.412: INFO: Got endpoints: latency-svc-mxr7h [748.937075ms] Apr 15 14:10:35.428: INFO: Created: latency-svc-ts256 Apr 15 14:10:35.442: INFO: Got endpoints: latency-svc-ts256 [736.065994ms] Apr 15 14:10:35.477: INFO: Created: latency-svc-6p2xj Apr 15 14:10:35.518: INFO: Got endpoints: latency-svc-6p2xj [782.163066ms] Apr 15 14:10:35.540: INFO: Created: latency-svc-zbgld Apr 15 14:10:35.557: INFO: Got endpoints: latency-svc-zbgld [760.831833ms] Apr 15 14:10:35.576: INFO: Created: latency-svc-dxpxq Apr 15 14:10:35.587: INFO: Got endpoints: latency-svc-dxpxq [740.200876ms] Apr 15 14:10:35.606: INFO: Created: latency-svc-zp4sm Apr 15 14:10:35.662: INFO: Got endpoints: latency-svc-zp4sm [718.811217ms] Apr 15 14:10:35.681: INFO: Created: latency-svc-q5wwg Apr 15 14:10:35.696: INFO: Got endpoints: latency-svc-q5wwg [724.797947ms] Apr 15 14:10:35.714: INFO: Created: latency-svc-hchlq Apr 15 14:10:35.726: INFO: Got endpoints: latency-svc-hchlq [694.486636ms] Apr 15 14:10:35.744: INFO: Created: latency-svc-bkq8c Apr 15 14:10:35.756: INFO: Got endpoints: latency-svc-bkq8c [657.35587ms] Apr 15 14:10:35.811: INFO: Created: latency-svc-9sk6q Apr 15 14:10:35.817: INFO: Got endpoints: latency-svc-9sk6q [682.037044ms] Apr 15 14:10:35.837: INFO: Created: latency-svc-9rvd6 Apr 15 14:10:35.854: INFO: Got 
endpoints: latency-svc-9rvd6 [695.470776ms]
Apr 15 14:10:35.873: INFO: Created: latency-svc-9c9lh
Apr 15 14:10:35.884: INFO: Got endpoints: latency-svc-9c9lh [701.194624ms]
Apr 15 14:10:35.950: INFO: Created: latency-svc-xwwzd
Apr 15 14:10:35.966: INFO: Got endpoints: latency-svc-xwwzd [716.78861ms]
Apr 15 14:10:35.990: INFO: Created: latency-svc-k2p7k
Apr 15 14:10:36.004: INFO: Got endpoints: latency-svc-k2p7k [707.195291ms]
Apr 15 14:10:36.023: INFO: Created: latency-svc-z9tpb
Apr 15 14:10:36.034: INFO: Got endpoints: latency-svc-z9tpb [648.598459ms]
Apr 15 14:10:36.093: INFO: Created: latency-svc-wkkvs
Apr 15 14:10:36.100: INFO: Got endpoints: latency-svc-wkkvs [688.482678ms]
Apr 15 14:10:36.122: INFO: Created: latency-svc-xbfgq
Apr 15 14:10:36.137: INFO: Got endpoints: latency-svc-xbfgq [695.240595ms]
Apr 15 14:10:36.158: INFO: Created: latency-svc-7t997
Apr 15 14:10:36.167: INFO: Got endpoints: latency-svc-7t997 [649.179832ms]
Apr 15 14:10:36.189: INFO: Created: latency-svc-mtz8d
Apr 15 14:10:36.231: INFO: Got endpoints: latency-svc-mtz8d [673.527732ms]
Apr 15 14:10:36.246: INFO: Created: latency-svc-hgswk
Apr 15 14:10:36.258: INFO: Got endpoints: latency-svc-hgswk [671.598586ms]
Apr 15 14:10:36.299: INFO: Created: latency-svc-95rlf
Apr 15 14:10:36.380: INFO: Got endpoints: latency-svc-95rlf [718.463279ms]
Apr 15 14:10:36.381: INFO: Created: latency-svc-rsgnw
Apr 15 14:10:36.392: INFO: Got endpoints: latency-svc-rsgnw [696.00943ms]
Apr 15 14:10:36.419: INFO: Created: latency-svc-cf2sd
Apr 15 14:10:36.433: INFO: Got endpoints: latency-svc-cf2sd [706.462106ms]
Apr 15 14:10:36.458: INFO: Created: latency-svc-nvpxh
Apr 15 14:10:36.469: INFO: Got endpoints: latency-svc-nvpxh [713.03524ms]
Apr 15 14:10:36.542: INFO: Created: latency-svc-45xt5
Apr 15 14:10:36.553: INFO: Got endpoints: latency-svc-45xt5 [736.425508ms]
Apr 15 14:10:36.590: INFO: Created: latency-svc-qhzfs
Apr 15 14:10:36.620: INFO: Got endpoints: latency-svc-qhzfs [766.475593ms]
Apr 15 14:10:36.691: INFO: Created: latency-svc-z8jzt
Apr 15 14:10:36.695: INFO: Got endpoints: latency-svc-z8jzt [810.978649ms]
Apr 15 14:10:36.731: INFO: Created: latency-svc-hnql9
Apr 15 14:10:36.752: INFO: Got endpoints: latency-svc-hnql9 [786.585663ms]
Apr 15 14:10:36.817: INFO: Created: latency-svc-6rv49
Apr 15 14:10:36.823: INFO: Got endpoints: latency-svc-6rv49 [818.694034ms]
Apr 15 14:10:36.869: INFO: Created: latency-svc-hrs8g
Apr 15 14:10:36.885: INFO: Got endpoints: latency-svc-hrs8g [850.08281ms]
Apr 15 14:10:36.967: INFO: Created: latency-svc-kh4d9
Apr 15 14:10:36.981: INFO: Got endpoints: latency-svc-kh4d9 [880.195672ms]
Apr 15 14:10:37.016: INFO: Created: latency-svc-xctmm
Apr 15 14:10:37.035: INFO: Got endpoints: latency-svc-xctmm [897.766116ms]
Apr 15 14:10:37.058: INFO: Created: latency-svc-gks9r
Apr 15 14:10:37.086: INFO: Got endpoints: latency-svc-gks9r [919.261379ms]
Apr 15 14:10:37.108: INFO: Created: latency-svc-5x65r
Apr 15 14:10:37.126: INFO: Got endpoints: latency-svc-5x65r [895.121854ms]
Apr 15 14:10:37.145: INFO: Created: latency-svc-g2ptd
Apr 15 14:10:37.162: INFO: Got endpoints: latency-svc-g2ptd [903.548488ms]
Apr 15 14:10:37.178: INFO: Created: latency-svc-4549d
Apr 15 14:10:37.218: INFO: Got endpoints: latency-svc-4549d [837.904244ms]
Apr 15 14:10:37.220: INFO: Created: latency-svc-lxn7t
Apr 15 14:10:37.234: INFO: Got endpoints: latency-svc-lxn7t [842.636815ms]
Apr 15 14:10:37.275: INFO: Created: latency-svc-rlbz4
Apr 15 14:10:37.294: INFO: Got endpoints: latency-svc-rlbz4 [861.788345ms]
Apr 15 14:10:37.378: INFO: Created: latency-svc-crddb
Apr 15 14:10:37.380: INFO: Got endpoints: latency-svc-crddb [910.122974ms]
Apr 15 14:10:37.406: INFO: Created: latency-svc-ng7s4
Apr 15 14:10:37.422: INFO: Got endpoints: latency-svc-ng7s4 [868.598868ms]
Apr 15 14:10:37.448: INFO: Created: latency-svc-5jj4p
Apr 15 14:10:37.471: INFO: Got endpoints: latency-svc-5jj4p [850.543296ms]
Apr 15 14:10:37.531: INFO: Created: latency-svc-9pw5l
Apr 15 14:10:37.536: INFO: Got endpoints: latency-svc-9pw5l [841.289805ms]
Apr 15 14:10:37.565: INFO: Created: latency-svc-llxlp
Apr 15 14:10:37.578: INFO: Got endpoints: latency-svc-llxlp [826.120887ms]
Apr 15 14:10:37.598: INFO: Created: latency-svc-7nppx
Apr 15 14:10:37.609: INFO: Got endpoints: latency-svc-7nppx [785.679954ms]
Apr 15 14:10:37.628: INFO: Created: latency-svc-7rhdp
Apr 15 14:10:37.667: INFO: Got endpoints: latency-svc-7rhdp [782.596338ms]
Apr 15 14:10:37.676: INFO: Created: latency-svc-7chmt
Apr 15 14:10:37.709: INFO: Got endpoints: latency-svc-7chmt [728.022838ms]
Apr 15 14:10:37.739: INFO: Created: latency-svc-fxv6x
Apr 15 14:10:37.750: INFO: Got endpoints: latency-svc-fxv6x [715.255486ms]
Apr 15 14:10:37.812: INFO: Created: latency-svc-rl94r
Apr 15 14:10:37.838: INFO: Got endpoints: latency-svc-rl94r [751.2481ms]
Apr 15 14:10:37.839: INFO: Created: latency-svc-7wpjh
Apr 15 14:10:37.853: INFO: Got endpoints: latency-svc-7wpjh [727.727122ms]
Apr 15 14:10:37.874: INFO: Created: latency-svc-299h6
Apr 15 14:10:37.906: INFO: Got endpoints: latency-svc-299h6 [744.029937ms]
Apr 15 14:10:37.955: INFO: Created: latency-svc-ldg4k
Apr 15 14:10:37.974: INFO: Got endpoints: latency-svc-ldg4k [755.994264ms]
Apr 15 14:10:38.003: INFO: Created: latency-svc-p66l5
Apr 15 14:10:38.016: INFO: Got endpoints: latency-svc-p66l5 [781.750392ms]
Apr 15 14:10:38.036: INFO: Created: latency-svc-cmmwn
Apr 15 14:10:38.074: INFO: Got endpoints: latency-svc-cmmwn [779.983201ms]
Apr 15 14:10:38.102: INFO: Created: latency-svc-vhkzq
Apr 15 14:10:38.132: INFO: Got endpoints: latency-svc-vhkzq [752.006813ms]
Apr 15 14:10:38.152: INFO: Created: latency-svc-5pklb
Apr 15 14:10:38.167: INFO: Got endpoints: latency-svc-5pklb [745.147503ms]
Apr 15 14:10:38.225: INFO: Created: latency-svc-w9fhj
Apr 15 14:10:38.228: INFO: Got endpoints: latency-svc-w9fhj [757.343204ms]
Apr 15 14:10:38.265: INFO: Created: latency-svc-zqmpb
Apr 15 14:10:38.288: INFO: Got endpoints: latency-svc-zqmpb [751.64166ms]
Apr 15 14:10:38.324: INFO: Created: latency-svc-x8bl7
Apr 15 14:10:38.360: INFO: Got endpoints: latency-svc-x8bl7 [781.820056ms]
Apr 15 14:10:38.392: INFO: Created: latency-svc-llxz2
Apr 15 14:10:38.416: INFO: Got endpoints: latency-svc-llxz2 [806.97353ms]
Apr 15 14:10:38.438: INFO: Created: latency-svc-cgns4
Apr 15 14:10:38.451: INFO: Got endpoints: latency-svc-cgns4 [783.118066ms]
Apr 15 14:10:38.501: INFO: Created: latency-svc-rb4rg
Apr 15 14:10:38.504: INFO: Got endpoints: latency-svc-rb4rg [795.502718ms]
Apr 15 14:10:38.554: INFO: Created: latency-svc-xj8dr
Apr 15 14:10:38.571: INFO: Got endpoints: latency-svc-xj8dr [821.033816ms]
Apr 15 14:10:38.591: INFO: Created: latency-svc-rvjpx
Apr 15 14:10:38.643: INFO: Got endpoints: latency-svc-rvjpx [805.894328ms]
Apr 15 14:10:38.667: INFO: Created: latency-svc-fb7sd
Apr 15 14:10:38.679: INFO: Got endpoints: latency-svc-fb7sd [825.937485ms]
Apr 15 14:10:38.705: INFO: Created: latency-svc-zg8bk
Apr 15 14:10:38.716: INFO: Got endpoints: latency-svc-zg8bk [810.299295ms]
Apr 15 14:10:38.741: INFO: Created: latency-svc-9xqqd
Apr 15 14:10:38.806: INFO: Got endpoints: latency-svc-9xqqd [831.525458ms]
Apr 15 14:10:38.809: INFO: Created: latency-svc-9fl7q
Apr 15 14:10:38.828: INFO: Got endpoints: latency-svc-9fl7q [811.475946ms]
Apr 15 14:10:38.859: INFO: Created: latency-svc-mmrvm
Apr 15 14:10:38.873: INFO: Got endpoints: latency-svc-mmrvm [798.512984ms]
Apr 15 14:10:38.893: INFO: Created: latency-svc-6dbf9
Apr 15 14:10:38.904: INFO: Got endpoints: latency-svc-6dbf9 [771.854062ms]
Apr 15 14:10:38.962: INFO: Created: latency-svc-l76hb
Apr 15 14:10:38.996: INFO: Got endpoints: latency-svc-l76hb [828.720178ms]
Apr 15 14:10:38.996: INFO: Created: latency-svc-9485b
Apr 15 14:10:39.012: INFO: Got endpoints: latency-svc-9485b [783.454161ms]
Apr 15 14:10:39.044: INFO: Created: latency-svc-bs9sh
Apr 15 14:10:39.060: INFO: Got endpoints: latency-svc-bs9sh [772.297197ms]
Apr 15 14:10:39.112: INFO: Created: latency-svc-rp58b
Apr 15 14:10:39.114: INFO: Got endpoints: latency-svc-rp58b [753.732746ms]
Apr 15 14:10:39.143: INFO: Created: latency-svc-7pp88
Apr 15 14:10:39.151: INFO: Got endpoints: latency-svc-7pp88 [735.30637ms]
Apr 15 14:10:39.172: INFO: Created: latency-svc-8zftz
Apr 15 14:10:39.194: INFO: Got endpoints: latency-svc-8zftz [743.64552ms]
Apr 15 14:10:39.250: INFO: Created: latency-svc-78p6m
Apr 15 14:10:39.252: INFO: Got endpoints: latency-svc-78p6m [747.631752ms]
Apr 15 14:10:39.272: INFO: Created: latency-svc-8s8pd
Apr 15 14:10:39.283: INFO: Got endpoints: latency-svc-8s8pd [711.935189ms]
Apr 15 14:10:39.304: INFO: Created: latency-svc-hv2js
Apr 15 14:10:39.321: INFO: Got endpoints: latency-svc-hv2js [676.930008ms]
Apr 15 14:10:39.341: INFO: Created: latency-svc-qds6k
Apr 15 14:10:39.386: INFO: Got endpoints: latency-svc-qds6k [706.470521ms]
Apr 15 14:10:39.410: INFO: Created: latency-svc-6k25j
Apr 15 14:10:39.424: INFO: Got endpoints: latency-svc-6k25j [707.746078ms]
Apr 15 14:10:39.464: INFO: Created: latency-svc-rmrdw
Apr 15 14:10:39.542: INFO: Got endpoints: latency-svc-rmrdw [736.134344ms]
Apr 15 14:10:39.563: INFO: Created: latency-svc-cfk9d
Apr 15 14:10:39.579: INFO: Got endpoints: latency-svc-cfk9d [751.100492ms]
Apr 15 14:10:39.617: INFO: Created: latency-svc-7blhd
Apr 15 14:10:39.638: INFO: Got endpoints: latency-svc-7blhd [764.675308ms]
Apr 15 14:10:39.674: INFO: Created: latency-svc-n7h8v
Apr 15 14:10:39.688: INFO: Got endpoints: latency-svc-n7h8v [784.462819ms]
Apr 15 14:10:39.710: INFO: Created: latency-svc-d4jgt
Apr 15 14:10:39.724: INFO: Got endpoints: latency-svc-d4jgt [728.440231ms]
Apr 15 14:10:39.743: INFO: Created: latency-svc-n9cwc
Apr 15 14:10:39.754: INFO: Got endpoints: latency-svc-n9cwc [742.477361ms]
Apr 15 14:10:39.811: INFO: Created: latency-svc-gcvsl
Apr 15 14:10:39.814: INFO: Got endpoints: latency-svc-gcvsl [754.321083ms]
Apr 15 14:10:39.860: INFO: Created: latency-svc-dpv7x
Apr 15 14:10:39.875: INFO: Got endpoints: latency-svc-dpv7x [760.701015ms]
Apr 15 14:10:39.897: INFO: Created: latency-svc-qnbvh
Apr 15 14:10:39.955: INFO: Got endpoints: latency-svc-qnbvh [803.492623ms]
Apr 15 14:10:39.956: INFO: Created: latency-svc-nbdfc
Apr 15 14:10:39.959: INFO: Got endpoints: latency-svc-nbdfc [764.859814ms]
Apr 15 14:10:39.989: INFO: Created: latency-svc-th48l
Apr 15 14:10:40.002: INFO: Got endpoints: latency-svc-th48l [749.692727ms]
Apr 15 14:10:40.002: INFO: Latencies: [78.180434ms 119.991445ms 150.383686ms 232.188775ms 288.359841ms 330.441507ms 396.264213ms 451.489159ms 487.448892ms 547.171065ms 595.973078ms 626.078876ms 640.559912ms 642.380324ms 643.054284ms 647.455043ms 648.598459ms 648.933823ms 649.179832ms 652.967759ms 657.35587ms 658.455058ms 661.684266ms 664.664975ms 664.913291ms 667.606352ms 670.749554ms 670.9205ms 671.598586ms 673.527732ms 673.567755ms 676.930008ms 679.108411ms 680.889295ms 682.037044ms 682.801438ms 683.680435ms 684.883762ms 685.205873ms 685.525457ms 685.596708ms 687.561411ms 688.482678ms 688.537161ms 688.563614ms 689.145911ms 691.06155ms 691.566275ms 692.474062ms 694.22236ms 694.38073ms 694.486636ms 695.240595ms 695.34262ms 695.352629ms 695.470776ms 696.00943ms 697.536579ms 697.55388ms 700.022134ms 700.37788ms 700.496229ms 701.149989ms 701.194624ms 702.478738ms 703.016026ms 704.423268ms 706.462106ms 706.470521ms 707.195291ms 707.746078ms 708.31136ms 708.344723ms 708.523407ms 711.935189ms 712.146838ms 712.504251ms 713.03524ms 714.083686ms 714.98124ms 715.064821ms 715.103022ms 715.255486ms 715.470764ms 716.78861ms 718.174833ms 718.463279ms 718.811217ms 719.147504ms 720.472797ms 722.439156ms 724.273677ms 724.797947ms 726.85452ms 727.727122ms 728.022838ms 728.097735ms 728.440231ms 734.964223ms 735.30637ms 736.065994ms 736.113848ms 736.134344ms 736.157019ms 736.402881ms 736.425508ms 736.579301ms 740.200876ms 740.600488ms 742.477361ms 743.64552ms 744.029937ms 745.147503ms 747.631752ms 747.907618ms 748.164139ms 748.35101ms 748.937075ms 749.692727ms 751.100492ms 751.2481ms 751.64166ms 752.006813ms 752.128427ms 752.660035ms 753.732746ms 754.321083ms 755.835575ms 755.994264ms 757.343204ms 760.701015ms 760.831833ms 764.675308ms 764.859814ms 766.230858ms 766.475593ms 767.880058ms 768.915824ms 771.346918ms 771.854062ms 772.297197ms 774.817508ms 776.858631ms 777.880181ms 779.983201ms 781.750392ms 781.820056ms 782.163066ms 782.596338ms 783.118066ms 783.454161ms 784.136184ms 784.39966ms 784.462819ms 785.679954ms 786.177206ms 786.585663ms 789.745571ms 795.502718ms 798.512984ms 798.561866ms 799.189787ms 800.028112ms 801.075049ms 803.492623ms 804.935057ms 805.894328ms 806.97353ms 808.927579ms 810.299295ms 810.978649ms 811.475946ms 818.694034ms 821.033816ms 824.049898ms 825.937485ms 826.120887ms 828.720178ms 831.525458ms 832.260064ms 832.55041ms 835.611039ms 837.904244ms 837.991406ms 838.830545ms 841.289805ms 842.636815ms 850.08281ms 850.543296ms 860.617347ms 860.907965ms 861.788345ms 862.370472ms 868.598868ms 880.195672ms 895.121854ms 897.766116ms 903.548488ms 910.122974ms 919.261379ms]
Apr 15 14:10:40.002: INFO: 50 %ile: 736.065994ms
Apr 15 14:10:40.002: INFO: 90 %ile: 832.55041ms
Apr 15 14:10:40.002: INFO: 99 %ile: 910.122974ms
Apr 15 14:10:40.002: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:10:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2119" for this suite.
Apr 15 14:11:00.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:11:00.133: INFO: namespace svc-latency-2119 deletion completed in 20.09829858s
• [SLOW TEST:34.396 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:11:00.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 15 14:11:00.192: INFO: Waiting up to 5m0s for pod "pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e" in namespace "emptydir-3236" to be "success or failure"
Apr 15 14:11:00.231: INFO: Pod "pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.259985ms
Apr 15 14:11:02.261: INFO: Pod "pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069428647s
Apr 15 14:11:04.265: INFO: Pod "pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073602212s
STEP: Saw pod success
Apr 15 14:11:04.265: INFO: Pod "pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e" satisfied condition "success or failure"
Apr 15 14:11:04.269: INFO: Trying to get logs from node iruya-worker pod pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e container test-container:
STEP: delete the pod
Apr 15 14:11:04.287: INFO: Waiting for pod pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e to disappear
Apr 15 14:11:04.291: INFO: Pod pod-0bcac269-e4c5-44c7-b8e9-a953781a8b1e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:11:04.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3236" for this suite.
Apr 15 14:11:10.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:11:10.384: INFO: namespace emptydir-3236 deletion completed in 6.089248662s
• [SLOW TEST:10.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:11:10.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5fc84c9f-c205-4974-bfec-d588279640cf
STEP: Creating a pod to test consume configMaps
Apr 15 14:11:10.469: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de" in namespace "projected-3721" to be "success or failure"
Apr 15 14:11:10.488: INFO: Pod "pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de": Phase="Pending", Reason="", readiness=false. Elapsed: 19.326266ms
Apr 15 14:11:12.492: INFO: Pod "pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022998764s
Apr 15 14:11:14.496: INFO: Pod "pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02694826s
STEP: Saw pod success
Apr 15 14:11:14.496: INFO: Pod "pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de" satisfied condition "success or failure"
Apr 15 14:11:14.499: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de container projected-configmap-volume-test:
STEP: delete the pod
Apr 15 14:11:14.532: INFO: Waiting for pod pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de to disappear
Apr 15 14:11:14.543: INFO: Pod pod-projected-configmaps-ccef3776-a6c3-4d8e-9288-17cad8d9f1de no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:11:14.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3721" for this suite.
Apr 15 14:11:20.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:11:20.634: INFO: namespace projected-3721 deletion completed in 6.086993627s
• [SLOW TEST:10.250 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:11:20.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5088bd96-14c6-4767-bacf-b79608e76715
STEP: Creating a pod to test consume configMaps
Apr 15 14:11:20.700: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a" in namespace "projected-4147" to be "success or failure"
Apr 15 14:11:20.711: INFO: Pod "pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.658037ms
Apr 15 14:11:22.715: INFO: Pod "pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015070782s
Apr 15 14:11:24.719: INFO: Pod "pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019239913s
STEP: Saw pod success
Apr 15 14:11:24.719: INFO: Pod "pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a" satisfied condition "success or failure"
Apr 15 14:11:24.722: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a container projected-configmap-volume-test:
STEP: delete the pod
Apr 15 14:11:24.749: INFO: Waiting for pod pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a to disappear
Apr 15 14:11:24.759: INFO: Pod pod-projected-configmaps-26a3f923-7a4b-4dcc-95c7-40e1f2b7eb7a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:11:24.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4147" for this suite.
Apr 15 14:11:30.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:11:30.881: INFO: namespace projected-4147 deletion completed in 6.118749447s
• [SLOW TEST:10.247 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:11:30.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7834/configmap-test-69b7bcbf-4b8d-4164-9467-181159618bc1
STEP: Creating a pod to test consume configMaps
Apr 15 14:11:30.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4" in namespace "configmap-7834" to be "success or failure"
Apr 15 14:11:30.969: INFO: Pod "pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.706149ms
Apr 15 14:11:32.980: INFO: Pod "pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014382716s
Apr 15 14:11:34.984: INFO: Pod "pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018448068s
STEP: Saw pod success
Apr 15 14:11:34.984: INFO: Pod "pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4" satisfied condition "success or failure"
Apr 15 14:11:34.987: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4 container env-test:
STEP: delete the pod
Apr 15 14:11:35.017: INFO: Waiting for pod pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4 to disappear
Apr 15 14:11:35.020: INFO: Pod pod-configmaps-c721d6ae-f6a4-43d6-ac32-8dbf9647aea4 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:11:35.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7834" for this suite.
Apr 15 14:11:41.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:11:41.173: INFO: namespace configmap-7834 deletion completed in 6.149259178s
• [SLOW TEST:10.292 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:11:41.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8895/configmap-test-7944e98c-4078-4a55-adc8-bb286ae7f35c
STEP: Creating a pod to test consume configMaps
Apr 15 14:11:41.287: INFO: Waiting up to 5m0s for pod "pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8" in namespace "configmap-8895" to be "success or failure"
Apr 15 14:11:41.302: INFO: Pod "pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.601658ms
Apr 15 14:11:43.307: INFO: Pod "pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019780909s
Apr 15 14:11:45.310: INFO: Pod "pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023115212s
STEP: Saw pod success
Apr 15 14:11:45.310: INFO: Pod "pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8" satisfied condition "success or failure"
Apr 15 14:11:45.312: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8 container env-test:
STEP: delete the pod
Apr 15 14:11:45.354: INFO: Waiting for pod pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8 to disappear
Apr 15 14:11:45.361: INFO: Pod pod-configmaps-987a479f-ebde-45b2-a9c2-b84c13d162e8 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:11:45.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8895" for this suite.
Apr 15 14:11:51.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:11:51.455: INFO: namespace configmap-8895 deletion completed in 6.091338608s
• [SLOW TEST:10.282 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:11:51.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-613c88de-a25f-4c69-9761-b9ac5b7ef818
STEP: Creating a pod to test consume configMaps
Apr 15 14:11:51.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3" in namespace "configmap-8631" to be "success or failure"
Apr 15 14:11:51.524: INFO: Pod "pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74593ms
Apr 15 14:11:53.527: INFO: Pod "pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007172108s
Apr 15 14:11:55.531: INFO: Pod "pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011495354s
STEP: Saw pod success
Apr 15 14:11:55.531: INFO: Pod "pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3" satisfied condition "success or failure"
Apr 15 14:11:55.535: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3 container configmap-volume-test:
STEP: delete the pod
Apr 15 14:11:55.599: INFO: Waiting for pod pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3 to disappear
Apr 15 14:11:55.608: INFO: Pod pod-configmaps-81ce4da8-21cd-43ca-b6de-68e0b4ae64b3 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:11:55.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8631" for this suite.
Apr 15 14:12:01.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:12:01.716: INFO: namespace configmap-8631 deletion completed in 6.105476909s
• [SLOW TEST:10.260 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:12:01.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-829fcbad-f198-4b7c-a111-65786e374bf1
STEP: Creating configMap with name cm-test-opt-upd-4cd2781f-959e-446a-b1a5-ae6fd80e399d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-829fcbad-f198-4b7c-a111-65786e374bf1
STEP: Updating configmap cm-test-opt-upd-4cd2781f-959e-446a-b1a5-ae6fd80e399d
STEP: Creating configMap with name cm-test-opt-create-76e72764-c207-4e5f-842d-1da516758662
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:12:11.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4935" for this suite.
Apr 15 14:12:35.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:12:36.000: INFO: namespace configmap-4935 deletion completed in 24.086633143s
• [SLOW TEST:34.283 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:12:36.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 15 14:12:40.119: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:12:40.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9855" for this suite.
Apr 15 14:12:46.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:12:46.264: INFO: namespace container-runtime-9855 deletion completed in 6.123641559s
• [SLOW TEST:10.263 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:12:46.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-9fc47800-8256-41a1-9df0-ed423236ad34 in namespace container-probe-3108
Apr 15 14:12:50.431: INFO: Started pod liveness-9fc47800-8256-41a1-9df0-ed423236ad34 in namespace container-probe-3108
STEP: checking the pod's current state and verifying that restartCount is present
Apr 15 14:12:50.435: INFO: Initial restart count of pod liveness-9fc47800-8256-41a1-9df0-ed423236ad34 is 0
Apr 15 14:13:12.518: INFO: Restart count of pod container-probe-3108/liveness-9fc47800-8256-41a1-9df0-ed423236ad34 is now 1 (22.082523975s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:13:12.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3108" for this suite.
Apr 15 14:13:18.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:13:18.669: INFO: namespace container-probe-3108 deletion completed in 6.122398244s
• [SLOW TEST:32.405 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:13:18.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 14:13:18.751: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 15 14:13:23.756: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 15 14:13:23.756: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 15 14:13:23.795: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7884,SelfLink:/apis/apps/v1/namespaces/deployment-7884/deployments/test-cleanup-deployment,UID:3a653963-617a-4e0e-abe0-cec490f93f7a,ResourceVersion:5575303,Generation:1,CreationTimestamp:2020-04-15 14:13:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 15 14:13:23.814: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7884,SelfLink:/apis/apps/v1/namespaces/deployment-7884/replicasets/test-cleanup-deployment-55bbcbc84c,UID:c35cc19a-8f6b-48ac-bf16-05abfdb23d05,ResourceVersion:5575305,Generation:1,CreationTimestamp:2020-04-15 14:13:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3a653963-617a-4e0e-abe0-cec490f93f7a 0xc0021858d7 0xc0021858d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 15 14:13:23.814: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 15 14:13:23.815: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7884,SelfLink:/apis/apps/v1/namespaces/deployment-7884/replicasets/test-cleanup-controller,UID:927fb0e7-6170-46a3-8905-8a3ec2278f6f,ResourceVersion:5575304,Generation:1,CreationTimestamp:2020-04-15 14:13:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3a653963-617a-4e0e-abe0-cec490f93f7a 0xc002185807 0xc002185808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 15 14:13:23.852: INFO: Pod "test-cleanup-controller-fbdjb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-fbdjb,GenerateName:test-cleanup-controller-,Namespace:deployment-7884,SelfLink:/api/v1/namespaces/deployment-7884/pods/test-cleanup-controller-fbdjb,UID:9ad4c7b3-218f-47c7-a3bd-ad1213c6c14c,ResourceVersion:5575296,Generation:0,CreationTimestamp:2020-04-15 14:13:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 927fb0e7-6170-46a3-8905-8a3ec2278f6f 0xc00366e197 0xc00366e198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76hqj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76hqj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76hqj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00366e210} {node.kubernetes.io/unreachable Exists NoExecute 0xc00366e230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:13:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:13:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:13:21 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:13:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.101,StartTime:2020-04-15 14:13:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-15 14:13:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7cb16a918ca1533a55c9748975a26a4cfba788f27c42cbe249db44607e8e9cc4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 15 14:13:23.852: INFO: Pod "test-cleanup-deployment-55bbcbc84c-lg4sn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-lg4sn,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7884,SelfLink:/api/v1/namespaces/deployment-7884/pods/test-cleanup-deployment-55bbcbc84c-lg4sn,UID:d038a603-8786-435c-ab10-e2425444a588,ResourceVersion:5575311,Generation:0,CreationTimestamp:2020-04-15 14:13:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c c35cc19a-8f6b-48ac-bf16-05abfdb23d05 0xc00366e327 0xc00366e328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76hqj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76hqj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-76hqj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00366e3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00366e3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:13:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:13:23.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7884" for this suite. 
Apr 15 14:13:29.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:13:30.040: INFO: namespace deployment-7884 deletion completed in 6.116649817s • [SLOW TEST:11.370 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:13:30.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 14:13:30.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41" in namespace "downward-api-3140" to be "success or failure" Apr 15 14:13:30.118: INFO: Pod "downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.753908ms Apr 15 14:13:32.131: INFO: Pod "downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031323873s Apr 15 14:13:34.135: INFO: Pod "downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035610535s STEP: Saw pod success Apr 15 14:13:34.135: INFO: Pod "downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41" satisfied condition "success or failure" Apr 15 14:13:34.138: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41 container client-container: STEP: delete the pod Apr 15 14:13:34.159: INFO: Waiting for pod downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41 to disappear Apr 15 14:13:34.227: INFO: Pod downwardapi-volume-1edeb593-60cc-415a-b690-eade192daa41 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:13:34.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3140" for this suite. 
Apr 15 14:13:40.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:13:40.328: INFO: namespace downward-api-3140 deletion completed in 6.09772092s • [SLOW TEST:10.288 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:13:40.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ad4ac37c-83ae-48a6-a472-5ff160848560 STEP: Creating a pod to test consume configMaps Apr 15 14:13:40.395: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591" in namespace "projected-6609" to be "success or failure" Apr 15 14:13:40.413: INFO: Pod "pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.758271ms Apr 15 14:13:42.417: INFO: Pod "pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022052303s Apr 15 14:13:44.422: INFO: Pod "pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027063281s STEP: Saw pod success Apr 15 14:13:44.423: INFO: Pod "pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591" satisfied condition "success or failure" Apr 15 14:13:44.426: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591 container projected-configmap-volume-test: STEP: delete the pod Apr 15 14:13:44.454: INFO: Waiting for pod pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591 to disappear Apr 15 14:13:44.459: INFO: Pod pod-projected-configmaps-70951201-68f0-4968-b983-4cb1492c7591 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:13:44.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6609" for this suite. 
Apr 15 14:13:50.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:13:50.554: INFO: namespace projected-6609 deletion completed in 6.091084799s • [SLOW TEST:10.223 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:13:50.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 15 14:13:50.618: INFO: Waiting up to 5m0s for pod "pod-6009323e-01bd-45a1-95b9-70ddfd871f76" in namespace "emptydir-3670" to be "success or failure" Apr 15 14:13:50.621: INFO: Pod "pod-6009323e-01bd-45a1-95b9-70ddfd871f76": Phase="Pending", Reason="", readiness=false. Elapsed: 3.553108ms Apr 15 14:13:52.625: INFO: Pod "pod-6009323e-01bd-45a1-95b9-70ddfd871f76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007071304s Apr 15 14:13:54.629: INFO: Pod "pod-6009323e-01bd-45a1-95b9-70ddfd871f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011245679s STEP: Saw pod success Apr 15 14:13:54.629: INFO: Pod "pod-6009323e-01bd-45a1-95b9-70ddfd871f76" satisfied condition "success or failure" Apr 15 14:13:54.632: INFO: Trying to get logs from node iruya-worker pod pod-6009323e-01bd-45a1-95b9-70ddfd871f76 container test-container: STEP: delete the pod Apr 15 14:13:54.694: INFO: Waiting for pod pod-6009323e-01bd-45a1-95b9-70ddfd871f76 to disappear Apr 15 14:13:54.697: INFO: Pod pod-6009323e-01bd-45a1-95b9-70ddfd871f76 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:13:54.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3670" for this suite. Apr 15 14:14:00.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:14:00.801: INFO: namespace emptydir-3670 deletion completed in 6.100312838s • [SLOW TEST:10.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:14:00.802: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-eea16420-4875-4c25-ad2d-ab4bc3c23885 STEP: Creating configMap with name cm-test-opt-upd-e28397e8-ecc3-4485-a4ff-6c63fc4dca36 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-eea16420-4875-4c25-ad2d-ab4bc3c23885 STEP: Updating configmap cm-test-opt-upd-e28397e8-ecc3-4485-a4ff-6c63fc4dca36 STEP: Creating configMap with name cm-test-opt-create-73274b7d-d8ea-40c7-acd9-6d3e74f2693f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:15:19.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7158" for this suite. 
Apr 15 14:15:41.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:15:41.396: INFO: namespace projected-7158 deletion completed in 22.096411155s • [SLOW TEST:100.595 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:15:41.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-c63e733e-8598-423d-bfd6-434c91df9c4d STEP: Creating a pod to test consume secrets Apr 15 14:15:41.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605" in namespace "projected-2115" to be "success or failure" Apr 15 14:15:41.485: INFO: Pod "pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.131838ms Apr 15 14:15:43.488: INFO: Pod "pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007547324s Apr 15 14:15:45.493: INFO: Pod "pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011960921s STEP: Saw pod success Apr 15 14:15:45.493: INFO: Pod "pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605" satisfied condition "success or failure" Apr 15 14:15:45.496: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605 container projected-secret-volume-test: STEP: delete the pod Apr 15 14:15:45.516: INFO: Waiting for pod pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605 to disappear Apr 15 14:15:45.551: INFO: Pod pod-projected-secrets-83b47ba6-6c12-4ff5-85bc-7062efc7e605 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:15:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2115" for this suite. 
Apr 15 14:15:51.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:15:51.642: INFO: namespace projected-2115 deletion completed in 6.088214206s • [SLOW TEST:10.245 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:15:51.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1337, will wait for the garbage collector to delete the pods Apr 15 14:15:55.764: INFO: Deleting Job.batch foo took: 5.366464ms Apr 15 14:15:56.064: INFO: Terminating Job.batch foo pods took: 300.238288ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:16:42.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1337" for this suite. 
Apr 15 14:16:48.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:16:48.259: INFO: namespace job-1337 deletion completed in 6.087588784s • [SLOW TEST:56.616 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:16:48.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 15 14:16:53.365: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:16:54.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8195" for this suite. 
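[Editor's note] The adoption/release behavior above can be illustrated with a bare pod plus a ReplicaSet whose selector matches it. The pod name `pod-adoption-release` and the `name` label key come from the log; images and replica count are assumptions.

```yaml
# A bare pod with a 'name' label, created first.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pod-adoption-release
    image: nginx                   # image is an assumption
---
# A ReplicaSet with a matching selector: it adopts the orphan pod
# by setting an ownerReference on it. If the pod's label is later
# changed so it no longer matches, the controller releases it
# (removes the ownerReference) and creates a replacement.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx               # image is an assumption
```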
Apr 15 14:17:16.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:17:16.501: INFO: namespace replicaset-8195 deletion completed in 22.09719997s • [SLOW TEST:28.242 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:17:16.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 14:17:16.604: INFO: Create a RollingUpdate DaemonSet Apr 15 14:17:16.608: INFO: Check that daemon pods launch on every node of the cluster Apr 15 14:17:16.613: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:16.618: INFO: Number of nodes with available pods: 0 Apr 15 14:17:16.618: INFO: Node iruya-worker is running more than one daemon pod Apr 15 14:17:17.623: INFO: DaemonSet pods 
can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:17.626: INFO: Number of nodes with available pods: 0 Apr 15 14:17:17.626: INFO: Node iruya-worker is running more than one daemon pod Apr 15 14:17:18.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:18.626: INFO: Number of nodes with available pods: 0 Apr 15 14:17:18.626: INFO: Node iruya-worker is running more than one daemon pod Apr 15 14:17:19.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:19.703: INFO: Number of nodes with available pods: 0 Apr 15 14:17:19.703: INFO: Node iruya-worker is running more than one daemon pod Apr 15 14:17:20.622: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:20.626: INFO: Number of nodes with available pods: 1 Apr 15 14:17:20.626: INFO: Node iruya-worker is running more than one daemon pod Apr 15 14:17:21.622: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:21.625: INFO: Number of nodes with available pods: 2 Apr 15 14:17:21.625: INFO: Number of running nodes: 2, number of available pods: 2 Apr 15 14:17:21.625: INFO: Update the DaemonSet to trigger a rollout Apr 15 14:17:21.631: INFO: Updating DaemonSet daemon-set Apr 15 14:17:24.650: INFO: Roll back the DaemonSet before rollout is complete Apr 15 14:17:24.656: INFO: Updating DaemonSet daemon-set Apr 15 14:17:24.656: INFO: Make sure DaemonSet rollback is complete Apr 
15 14:17:24.685: INFO: Wrong image for pod: daemon-set-kx5jz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 15 14:17:24.685: INFO: Pod daemon-set-kx5jz is not available Apr 15 14:17:24.697: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:25.701: INFO: Wrong image for pod: daemon-set-kx5jz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 15 14:17:25.701: INFO: Pod daemon-set-kx5jz is not available Apr 15 14:17:25.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:26.702: INFO: Wrong image for pod: daemon-set-kx5jz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 15 14:17:26.702: INFO: Pod daemon-set-kx5jz is not available Apr 15 14:17:26.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 15 14:17:27.702: INFO: Pod daemon-set-wm5lz is not available Apr 15 14:17:27.707: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6952, will wait for the garbage collector to delete the pods Apr 15 14:17:27.777: INFO: Deleting DaemonSet.extensions daemon-set took: 12.317833ms Apr 15 14:17:28.078: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.800444ms Apr 15 14:17:31.082: INFO: Number of nodes with available pods: 0 Apr 15 
14:17:31.082: INFO: Number of running nodes: 0, number of available pods: 0 Apr 15 14:17:31.084: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6952/daemonsets","resourceVersion":"5576140"},"items":null} Apr 15 14:17:31.086: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6952/pods","resourceVersion":"5576140"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:17:31.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6952" for this suite. Apr 15 14:17:37.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:17:37.208: INFO: namespace daemonsets-6952 deletion completed in 6.109771604s • [SLOW TEST:20.707 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:17:37.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 15 14:17:37.237: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix851452531/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:17:37.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2697" for this suite. Apr 15 14:17:43.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:17:43.432: INFO: namespace kubectl-2697 deletion completed in 6.092195609s • [SLOW TEST:6.223 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:17:43.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in 
namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:17:47.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4227" for this suite. Apr 15 14:17:53.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:17:53.715: INFO: namespace emptydir-wrapper-4227 deletion completed in 6.095850083s • [SLOW TEST:10.283 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:17:53.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 14:17:53.767: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.270818ms)
Apr 15 14:17:53.771: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.610966ms)
Apr 15 14:17:53.774: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.400762ms)
Apr 15 14:17:53.777: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.439583ms)
Apr 15 14:17:53.781: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.990146ms)
Apr 15 14:17:53.784: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.946987ms)
Apr 15 14:17:53.786: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.676645ms)
Apr 15 14:17:53.789: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.974881ms)
Apr 15 14:17:53.792: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.059726ms)
Apr 15 14:17:53.796: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.268545ms)
Apr 15 14:17:53.799: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.221727ms)
Apr 15 14:17:53.802: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.060915ms)
Apr 15 14:17:53.805: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.405683ms)
Apr 15 14:17:53.809: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.948871ms)
Apr 15 14:17:53.830: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 20.294001ms)
Apr 15 14:17:53.833: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.128905ms)
Apr 15 14:17:53.836: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.962732ms)
Apr 15 14:17:53.840: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.676919ms)
Apr 15 14:17:53.847: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 7.02307ms)
Apr 15 14:17:53.850: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.279715ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:17:53.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-562" for this suite. Apr 15 14:17:59.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:17:59.940: INFO: namespace proxy-562 deletion completed in 6.08738126s • [SLOW TEST:6.225 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:17:59.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 
15 14:18:00.047: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2943,SelfLink:/api/v1/namespaces/watch-2943/configmaps/e2e-watch-test-resource-version,UID:36bb59bd-32a6-477f-b708-24c74e33580f,ResourceVersion:5576275,Generation:0,CreationTimestamp:2020-04-15 14:18:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 15 14:18:00.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2943,SelfLink:/api/v1/namespaces/watch-2943/configmaps/e2e-watch-test-resource-version,UID:36bb59bd-32a6-477f-b708-24c74e33580f,ResourceVersion:5576276,Generation:0,CreationTimestamp:2020-04-15 14:18:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:18:00.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2943" for this suite. 
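[Editor's note] The two events dumped above (MODIFIED at ResourceVersion 5576275, DELETED at 5576276) correspond to this object, rendered here as YAML from the log's own field dump:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-2943
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"
```

The test replays history by opening the watch with the resourceVersion returned by the first update, i.e. a request of the form `GET /api/v1/namespaces/watch-2943/configmaps?watch=true&resourceVersion=<rv-of-first-update>`, so the server delivers the second modification and the deletion even though both happened before the watch was opened.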
Apr 15 14:18:06.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:18:06.172: INFO: namespace watch-2943 deletion completed in 6.121715135s • [SLOW TEST:6.231 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:18:06.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 15 14:18:06.227: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:18:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7888" for this suite. 
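[Editor's note] A minimal sketch of a RestartNever pod with init containers, matching the "PodSpec: initContainers in spec.initContainers" line above. Names, images, and commands are assumptions; the point is that both init containers must run to completion, in order, before the app container starts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
spec:
  restartPolicy: Never
  initContainers:                  # run sequentially before 'containers'
  - name: init1
    image: busybox                 # image is an assumption
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["sleep", "30"]
```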
Apr 15 14:18:19.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:18:19.140: INFO: namespace init-container-7888 deletion completed in 6.101523785s • [SLOW TEST:12.968 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:18:19.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 15 14:18:19.196: INFO: Waiting up to 5m0s for pod "pod-cb1de506-66fc-47cf-9d02-827ac10be2de" in namespace "emptydir-8092" to be "success or failure" Apr 15 14:18:19.199: INFO: Pod "pod-cb1de506-66fc-47cf-9d02-827ac10be2de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.648889ms Apr 15 14:18:21.224: INFO: Pod "pod-cb1de506-66fc-47cf-9d02-827ac10be2de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028560361s Apr 15 14:18:23.339: INFO: Pod "pod-cb1de506-66fc-47cf-9d02-827ac10be2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143167663s STEP: Saw pod success Apr 15 14:18:23.339: INFO: Pod "pod-cb1de506-66fc-47cf-9d02-827ac10be2de" satisfied condition "success or failure" Apr 15 14:18:23.350: INFO: Trying to get logs from node iruya-worker pod pod-cb1de506-66fc-47cf-9d02-827ac10be2de container test-container: STEP: delete the pod Apr 15 14:18:23.377: INFO: Waiting for pod pod-cb1de506-66fc-47cf-9d02-827ac10be2de to disappear Apr 15 14:18:23.391: INFO: Pod pod-cb1de506-66fc-47cf-9d02-827ac10be2de no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:18:23.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8092" for this suite. Apr 15 14:18:29.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:18:29.489: INFO: namespace emptydir-8092 deletion completed in 6.09406097s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:18:29.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 14:18:29.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b" in namespace "projected-362" to be "success or failure" Apr 15 14:18:29.575: INFO: Pod "downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.759603ms Apr 15 14:18:31.596: INFO: Pod "downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052831566s Apr 15 14:18:33.600: INFO: Pod "downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05737501s STEP: Saw pod success Apr 15 14:18:33.600: INFO: Pod "downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b" satisfied condition "success or failure" Apr 15 14:18:33.604: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b container client-container: STEP: delete the pod Apr 15 14:18:33.621: INFO: Waiting for pod downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b to disappear Apr 15 14:18:33.625: INFO: Pod downwardapi-volume-c1a1894c-0569-486f-b1bd-63fd8f3de79b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:18:33.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-362" for this suite. 
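[Editor's note] A sketch of the "podname only" downward API pod. The container name `client-container` appears in the log; the mount path, image, and command are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # image is an assumption
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo      # path is an assumption
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # "podname only"
```

The pod succeeds once the file containing its own name has been written into the volume and read back by the command.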
Apr 15 14:18:39.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:18:39.746: INFO: namespace projected-362 deletion completed in 6.11804043s • [SLOW TEST:10.256 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:18:39.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 15 14:18:39.825: INFO: Waiting up to 5m0s for pod "pod-6717a661-8a15-42b4-a45a-62de4f021b79" in namespace "emptydir-7277" to be "success or failure" Apr 15 14:18:39.828: INFO: Pod "pod-6717a661-8a15-42b4-a45a-62de4f021b79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.530442ms Apr 15 14:18:41.832: INFO: Pod "pod-6717a661-8a15-42b4-a45a-62de4f021b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00753449s Apr 15 14:18:43.837: INFO: Pod "pod-6717a661-8a15-42b4-a45a-62de4f021b79": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012158957s STEP: Saw pod success Apr 15 14:18:43.837: INFO: Pod "pod-6717a661-8a15-42b4-a45a-62de4f021b79" satisfied condition "success or failure" Apr 15 14:18:43.840: INFO: Trying to get logs from node iruya-worker pod pod-6717a661-8a15-42b4-a45a-62de4f021b79 container test-container: STEP: delete the pod Apr 15 14:18:43.860: INFO: Waiting for pod pod-6717a661-8a15-42b4-a45a-62de4f021b79 to disappear Apr 15 14:18:43.882: INFO: Pod pod-6717a661-8a15-42b4-a45a-62de4f021b79 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:18:43.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7277" for this suite. Apr 15 14:18:49.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:18:49.973: INFO: namespace emptydir-7277 deletion completed in 6.08835092s • [SLOW TEST:10.227 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:18:49.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 15 14:18:50.052: INFO: Waiting up to 5m0s for pod "pod-2a578a1b-d067-406a-b004-3324fa19d1b1" in namespace "emptydir-8115" to be "success or failure" Apr 15 14:18:50.069: INFO: Pod "pod-2a578a1b-d067-406a-b004-3324fa19d1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.866998ms Apr 15 14:18:52.073: INFO: Pod "pod-2a578a1b-d067-406a-b004-3324fa19d1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020930844s Apr 15 14:18:54.077: INFO: Pod "pod-2a578a1b-d067-406a-b004-3324fa19d1b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02480562s STEP: Saw pod success Apr 15 14:18:54.077: INFO: Pod "pod-2a578a1b-d067-406a-b004-3324fa19d1b1" satisfied condition "success or failure" Apr 15 14:18:54.080: INFO: Trying to get logs from node iruya-worker pod pod-2a578a1b-d067-406a-b004-3324fa19d1b1 container test-container: STEP: delete the pod Apr 15 14:18:54.100: INFO: Waiting for pod pod-2a578a1b-d067-406a-b004-3324fa19d1b1 to disappear Apr 15 14:18:54.105: INFO: Pod pod-2a578a1b-d067-406a-b004-3324fa19d1b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:18:54.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8115" for this suite. 
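[Editor's note] The "(root,0644,tmpfs)" variant above maps onto an emptyDir with `medium: Memory`, a root user, and a file created with mode 0644. The container name `test-container` is from the log; the image, command, and mount path are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # image is an assumption
    # Write a file with mode 0644 and print the mode back for verification.
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    securityContext:
      runAsUser: 0                 # "root" in the test title
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # "tmpfs" in the test title
```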
Apr 15 14:19:00.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:19:00.199: INFO: namespace emptydir-8115 deletion completed in 6.090565849s • [SLOW TEST:10.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:19:00.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 14:19:00.276: INFO: Creating ReplicaSet my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f Apr 15 14:19:00.290: INFO: Pod name my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f: Found 0 pods out of 1 Apr 15 14:19:05.294: INFO: Pod name my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f: Found 1 pods out of 1 Apr 15 14:19:05.295: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f" is running Apr 15 14:19:05.298: INFO: Pod "my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f-t42nw" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 14:19:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 14:19:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 14:19:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-15 14:19:00 +0000 UTC Reason: Message:}]) Apr 15 14:19:05.298: INFO: Trying to dial the pod Apr 15 14:19:10.311: INFO: Controller my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f: Got expected result from replica 1 [my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f-t42nw]: "my-hostname-basic-77372dd2-cbe1-40b5-bc0a-2dc83121723f-t42nw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:19:10.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8044" for this suite. 
Apr 15 14:19:16.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:19:16.428: INFO: namespace replicaset-8044 deletion completed in 6.112489044s • [SLOW TEST:16.228 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:19:16.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 15 14:19:16.497: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3366,SelfLink:/api/v1/namespaces/watch-3366/configmaps/e2e-watch-test-label-changed,UID:0df02923-3321-41c7-8e78-0a334f23b78f,ResourceVersion:5576599,Generation:0,CreationTimestamp:2020-04-15 14:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 15 14:19:16.498: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3366,SelfLink:/api/v1/namespaces/watch-3366/configmaps/e2e-watch-test-label-changed,UID:0df02923-3321-41c7-8e78-0a334f23b78f,ResourceVersion:5576600,Generation:0,CreationTimestamp:2020-04-15 14:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 15 14:19:16.498: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3366,SelfLink:/api/v1/namespaces/watch-3366/configmaps/e2e-watch-test-label-changed,UID:0df02923-3321-41c7-8e78-0a334f23b78f,ResourceVersion:5576601,Generation:0,CreationTimestamp:2020-04-15 14:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 15 14:19:26.560: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3366,SelfLink:/api/v1/namespaces/watch-3366/configmaps/e2e-watch-test-label-changed,UID:0df02923-3321-41c7-8e78-0a334f23b78f,ResourceVersion:5576622,Generation:0,CreationTimestamp:2020-04-15 14:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 15 14:19:26.560: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3366,SelfLink:/api/v1/namespaces/watch-3366/configmaps/e2e-watch-test-label-changed,UID:0df02923-3321-41c7-8e78-0a334f23b78f,ResourceVersion:5576623,Generation:0,CreationTimestamp:2020-04-15 14:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 15 14:19:26.560: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3366,SelfLink:/api/v1/namespaces/watch-3366/configmaps/e2e-watch-test-label-changed,UID:0df02923-3321-41c7-8e78-0a334f23b78f,ResourceVersion:5576624,Generation:0,CreationTimestamp:2020-04-15 14:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:19:26.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3366" for this suite. Apr 15 14:19:32.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:19:32.662: INFO: namespace watch-3366 deletion completed in 6.095410257s • [SLOW TEST:16.233 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:19:32.663: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-32d4fa89-8329-42ff-b9c2-6c1f16e92138 STEP: Creating a pod to test consume configMaps Apr 15 14:19:32.784: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828" in namespace "projected-2396" to be "success or failure" Apr 15 14:19:32.788: INFO: Pod "pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980792ms Apr 15 14:19:34.793: INFO: Pod "pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008427116s Apr 15 14:19:36.798: INFO: Pod "pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013255512s STEP: Saw pod success Apr 15 14:19:36.798: INFO: Pod "pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828" satisfied condition "success or failure" Apr 15 14:19:36.801: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828 container projected-configmap-volume-test: STEP: delete the pod Apr 15 14:19:36.836: INFO: Waiting for pod pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828 to disappear Apr 15 14:19:36.866: INFO: Pod pod-projected-configmaps-f6a7b64e-f9ec-4829-81bb-5d176c722828 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:19:36.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2396" for this suite. Apr 15 14:19:42.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:19:42.966: INFO: namespace projected-2396 deletion completed in 6.094501813s • [SLOW TEST:10.303 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:19:42.966: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 14:19:43.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e" in namespace "projected-3811" to be "success or failure" Apr 15 14:19:43.046: INFO: Pod "downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 40.066343ms Apr 15 14:19:45.051: INFO: Pod "downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044412045s Apr 15 14:19:47.056: INFO: Pod "downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049287834s STEP: Saw pod success Apr 15 14:19:47.056: INFO: Pod "downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e" satisfied condition "success or failure" Apr 15 14:19:47.059: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e container client-container: STEP: delete the pod Apr 15 14:19:47.102: INFO: Waiting for pod downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e to disappear Apr 15 14:19:47.130: INFO: Pod downwardapi-volume-485fec1b-a6a7-415b-b170-7d5095246a4e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:19:47.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3811" for this suite. 
Apr 15 14:19:53.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:19:53.240: INFO: namespace projected-3811 deletion completed in 6.084856813s • [SLOW TEST:10.274 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:19:53.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-756 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-756 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-756 Apr 15 14:19:53.315: INFO: 
Found 0 stateful pods, waiting for 1 Apr 15 14:20:03.328: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 15 14:20:03.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-756 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:20:05.817: INFO: stderr: "I0415 14:20:05.699444 2466 log.go:172] (0xc00083e8f0) (0xc000376780) Create stream\nI0415 14:20:05.699490 2466 log.go:172] (0xc00083e8f0) (0xc000376780) Stream added, broadcasting: 1\nI0415 14:20:05.702716 2466 log.go:172] (0xc00083e8f0) Reply frame received for 1\nI0415 14:20:05.702774 2466 log.go:172] (0xc00083e8f0) (0xc0008b0000) Create stream\nI0415 14:20:05.702795 2466 log.go:172] (0xc00083e8f0) (0xc0008b0000) Stream added, broadcasting: 3\nI0415 14:20:05.703830 2466 log.go:172] (0xc00083e8f0) Reply frame received for 3\nI0415 14:20:05.703883 2466 log.go:172] (0xc00083e8f0) (0xc000918000) Create stream\nI0415 14:20:05.703905 2466 log.go:172] (0xc00083e8f0) (0xc000918000) Stream added, broadcasting: 5\nI0415 14:20:05.704931 2466 log.go:172] (0xc00083e8f0) Reply frame received for 5\nI0415 14:20:05.778852 2466 log.go:172] (0xc00083e8f0) Data frame received for 5\nI0415 14:20:05.778887 2466 log.go:172] (0xc000918000) (5) Data frame handling\nI0415 14:20:05.778907 2466 log.go:172] (0xc000918000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:20:05.807623 2466 log.go:172] (0xc00083e8f0) Data frame received for 3\nI0415 14:20:05.807655 2466 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0415 14:20:05.807672 2466 log.go:172] (0xc0008b0000) (3) Data frame sent\nI0415 14:20:05.807689 2466 log.go:172] (0xc00083e8f0) Data frame received for 3\nI0415 14:20:05.807704 2466 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0415 14:20:05.807771 2466 log.go:172] (0xc00083e8f0) 
Data frame received for 5\nI0415 14:20:05.807789 2466 log.go:172] (0xc000918000) (5) Data frame handling\nI0415 14:20:05.810295 2466 log.go:172] (0xc00083e8f0) Data frame received for 1\nI0415 14:20:05.810403 2466 log.go:172] (0xc000376780) (1) Data frame handling\nI0415 14:20:05.810439 2466 log.go:172] (0xc000376780) (1) Data frame sent\nI0415 14:20:05.810464 2466 log.go:172] (0xc00083e8f0) (0xc000376780) Stream removed, broadcasting: 1\nI0415 14:20:05.810489 2466 log.go:172] (0xc00083e8f0) Go away received\nI0415 14:20:05.811066 2466 log.go:172] (0xc00083e8f0) (0xc000376780) Stream removed, broadcasting: 1\nI0415 14:20:05.811091 2466 log.go:172] (0xc00083e8f0) (0xc0008b0000) Stream removed, broadcasting: 3\nI0415 14:20:05.811103 2466 log.go:172] (0xc00083e8f0) (0xc000918000) Stream removed, broadcasting: 5\n" Apr 15 14:20:05.817: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:20:05.817: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:20:05.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 15 14:20:15.836: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:20:15.836: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 14:20:15.855: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 14:20:15.855: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC }] Apr 15 14:20:15.855: INFO: Apr 15 14:20:15.855: INFO: StatefulSet ss 
has not reached scale 3, at 1 Apr 15 14:20:16.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991896748s Apr 15 14:20:17.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98688801s Apr 15 14:20:18.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977986977s Apr 15 14:20:19.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.955383505s Apr 15 14:20:20.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.88952435s Apr 15 14:20:21.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.884025928s Apr 15 14:20:22.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.879035629s Apr 15 14:20:23.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.873772581s Apr 15 14:20:24.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 868.231527ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-756 Apr 15 14:20:25.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-756 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:20:26.224: INFO: stderr: "I0415 14:20:26.122004 2497 log.go:172] (0xc0008d6420) (0xc00052a6e0) Create stream\nI0415 14:20:26.122069 2497 log.go:172] (0xc0008d6420) (0xc00052a6e0) Stream added, broadcasting: 1\nI0415 14:20:26.125798 2497 log.go:172] (0xc0008d6420) Reply frame received for 1\nI0415 14:20:26.125862 2497 log.go:172] (0xc0008d6420) (0xc00052a000) Create stream\nI0415 14:20:26.125886 2497 log.go:172] (0xc0008d6420) (0xc00052a000) Stream added, broadcasting: 3\nI0415 14:20:26.127030 2497 log.go:172] (0xc0008d6420) Reply frame received for 3\nI0415 14:20:26.127076 2497 log.go:172] (0xc0008d6420) (0xc00050e280) Create stream\nI0415 14:20:26.127091 2497 log.go:172] (0xc0008d6420) (0xc00050e280) Stream added, broadcasting: 5\nI0415 
14:20:26.128017 2497 log.go:172] (0xc0008d6420) Reply frame received for 5\nI0415 14:20:26.216990 2497 log.go:172] (0xc0008d6420) Data frame received for 3\nI0415 14:20:26.217033 2497 log.go:172] (0xc00052a000) (3) Data frame handling\nI0415 14:20:26.217053 2497 log.go:172] (0xc00052a000) (3) Data frame sent\nI0415 14:20:26.217070 2497 log.go:172] (0xc0008d6420) Data frame received for 3\nI0415 14:20:26.217086 2497 log.go:172] (0xc00052a000) (3) Data frame handling\nI0415 14:20:26.217286 2497 log.go:172] (0xc0008d6420) Data frame received for 5\nI0415 14:20:26.217317 2497 log.go:172] (0xc00050e280) (5) Data frame handling\nI0415 14:20:26.217338 2497 log.go:172] (0xc00050e280) (5) Data frame sent\nI0415 14:20:26.217353 2497 log.go:172] (0xc0008d6420) Data frame received for 5\nI0415 14:20:26.217374 2497 log.go:172] (0xc00050e280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 14:20:26.218989 2497 log.go:172] (0xc0008d6420) Data frame received for 1\nI0415 14:20:26.219014 2497 log.go:172] (0xc00052a6e0) (1) Data frame handling\nI0415 14:20:26.219026 2497 log.go:172] (0xc00052a6e0) (1) Data frame sent\nI0415 14:20:26.219141 2497 log.go:172] (0xc0008d6420) (0xc00052a6e0) Stream removed, broadcasting: 1\nI0415 14:20:26.219374 2497 log.go:172] (0xc0008d6420) Go away received\nI0415 14:20:26.219573 2497 log.go:172] (0xc0008d6420) (0xc00052a6e0) Stream removed, broadcasting: 1\nI0415 14:20:26.219599 2497 log.go:172] (0xc0008d6420) (0xc00052a000) Stream removed, broadcasting: 3\nI0415 14:20:26.219608 2497 log.go:172] (0xc0008d6420) (0xc00050e280) Stream removed, broadcasting: 5\n" Apr 15 14:20:26.224: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:20:26.224: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:20:26.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-756 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:20:26.420: INFO: stderr: "I0415 14:20:26.347940 2520 log.go:172] (0xc00012a790) (0xc00067a960) Create stream\nI0415 14:20:26.347991 2520 log.go:172] (0xc00012a790) (0xc00067a960) Stream added, broadcasting: 1\nI0415 14:20:26.349958 2520 log.go:172] (0xc00012a790) Reply frame received for 1\nI0415 14:20:26.349985 2520 log.go:172] (0xc00012a790) (0xc00067aaa0) Create stream\nI0415 14:20:26.349992 2520 log.go:172] (0xc00012a790) (0xc00067aaa0) Stream added, broadcasting: 3\nI0415 14:20:26.350715 2520 log.go:172] (0xc00012a790) Reply frame received for 3\nI0415 14:20:26.350765 2520 log.go:172] (0xc00012a790) (0xc0002fc000) Create stream\nI0415 14:20:26.350778 2520 log.go:172] (0xc00012a790) (0xc0002fc000) Stream added, broadcasting: 5\nI0415 14:20:26.352328 2520 log.go:172] (0xc00012a790) Reply frame received for 5\nI0415 14:20:26.413523 2520 log.go:172] (0xc00012a790) Data frame received for 3\nI0415 14:20:26.413559 2520 log.go:172] (0xc00067aaa0) (3) Data frame handling\nI0415 14:20:26.413570 2520 log.go:172] (0xc00067aaa0) (3) Data frame sent\nI0415 14:20:26.413578 2520 log.go:172] (0xc00012a790) Data frame received for 3\nI0415 14:20:26.413584 2520 log.go:172] (0xc00067aaa0) (3) Data frame handling\nI0415 14:20:26.413616 2520 log.go:172] (0xc00012a790) Data frame received for 5\nI0415 14:20:26.413627 2520 log.go:172] (0xc0002fc000) (5) Data frame handling\nI0415 14:20:26.413638 2520 log.go:172] (0xc0002fc000) (5) Data frame sent\nI0415 14:20:26.413646 2520 log.go:172] (0xc00012a790) Data frame received for 5\nI0415 14:20:26.413653 2520 log.go:172] (0xc0002fc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0415 14:20:26.415108 2520 log.go:172] (0xc00012a790) Data frame received for 1\nI0415 14:20:26.415147 2520 log.go:172] (0xc00067a960) (1) Data frame 
handling\nI0415 14:20:26.415168 2520 log.go:172] (0xc00067a960) (1) Data frame sent\nI0415 14:20:26.415200 2520 log.go:172] (0xc00012a790) (0xc00067a960) Stream removed, broadcasting: 1\nI0415 14:20:26.415227 2520 log.go:172] (0xc00012a790) Go away received\nI0415 14:20:26.415644 2520 log.go:172] (0xc00012a790) (0xc00067a960) Stream removed, broadcasting: 1\nI0415 14:20:26.415671 2520 log.go:172] (0xc00012a790) (0xc00067aaa0) Stream removed, broadcasting: 3\nI0415 14:20:26.415694 2520 log.go:172] (0xc00012a790) (0xc0002fc000) Stream removed, broadcasting: 5\n" Apr 15 14:20:26.420: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:20:26.420: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:20:26.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-756 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:20:26.635: INFO: stderr: "I0415 14:20:26.557523 2542 log.go:172] (0xc000116e70) (0xc00064c780) Create stream\nI0415 14:20:26.557578 2542 log.go:172] (0xc000116e70) (0xc00064c780) Stream added, broadcasting: 1\nI0415 14:20:26.560039 2542 log.go:172] (0xc000116e70) Reply frame received for 1\nI0415 14:20:26.560071 2542 log.go:172] (0xc000116e70) (0xc0007b4000) Create stream\nI0415 14:20:26.560079 2542 log.go:172] (0xc000116e70) (0xc0007b4000) Stream added, broadcasting: 3\nI0415 14:20:26.560887 2542 log.go:172] (0xc000116e70) Reply frame received for 3\nI0415 14:20:26.560915 2542 log.go:172] (0xc000116e70) (0xc000278000) Create stream\nI0415 14:20:26.560923 2542 log.go:172] (0xc000116e70) (0xc000278000) Stream added, broadcasting: 5\nI0415 14:20:26.561811 2542 log.go:172] (0xc000116e70) Reply frame received for 5\nI0415 14:20:26.629781 2542 log.go:172] (0xc000116e70) Data frame received for 5\nI0415 14:20:26.629826 2542 log.go:172] (0xc000278000) (5) Data 
frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0415 14:20:26.629857 2542 log.go:172] (0xc000116e70) Data frame received for 3\nI0415 14:20:26.629883 2542 log.go:172] (0xc0007b4000) (3) Data frame handling\nI0415 14:20:26.629893 2542 log.go:172] (0xc0007b4000) (3) Data frame sent\nI0415 14:20:26.629923 2542 log.go:172] (0xc000116e70) Data frame received for 3\nI0415 14:20:26.629933 2542 log.go:172] (0xc0007b4000) (3) Data frame handling\nI0415 14:20:26.629947 2542 log.go:172] (0xc000278000) (5) Data frame sent\nI0415 14:20:26.629957 2542 log.go:172] (0xc000116e70) Data frame received for 5\nI0415 14:20:26.629971 2542 log.go:172] (0xc000278000) (5) Data frame handling\nI0415 14:20:26.631045 2542 log.go:172] (0xc000116e70) Data frame received for 1\nI0415 14:20:26.631075 2542 log.go:172] (0xc00064c780) (1) Data frame handling\nI0415 14:20:26.631090 2542 log.go:172] (0xc00064c780) (1) Data frame sent\nI0415 14:20:26.631113 2542 log.go:172] (0xc000116e70) (0xc00064c780) Stream removed, broadcasting: 1\nI0415 14:20:26.631131 2542 log.go:172] (0xc000116e70) Go away received\nI0415 14:20:26.631471 2542 log.go:172] (0xc000116e70) (0xc00064c780) Stream removed, broadcasting: 1\nI0415 14:20:26.631487 2542 log.go:172] (0xc000116e70) (0xc0007b4000) Stream removed, broadcasting: 3\nI0415 14:20:26.631495 2542 log.go:172] (0xc000116e70) (0xc000278000) Stream removed, broadcasting: 5\n" Apr 15 14:20:26.635: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:20:26.635: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:20:26.639: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 15 14:20:26.639: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 15 14:20:26.639: INFO: Waiting for pod ss-2 
to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 15 14:20:26.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-756 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:20:26.844: INFO: stderr: "I0415 14:20:26.768895 2562 log.go:172] (0xc0009ee4d0) (0xc0006c2640) Create stream\nI0415 14:20:26.768940 2562 log.go:172] (0xc0009ee4d0) (0xc0006c2640) Stream added, broadcasting: 1\nI0415 14:20:26.773534 2562 log.go:172] (0xc0009ee4d0) Reply frame received for 1\nI0415 14:20:26.773605 2562 log.go:172] (0xc0009ee4d0) (0xc000596460) Create stream\nI0415 14:20:26.773639 2562 log.go:172] (0xc0009ee4d0) (0xc000596460) Stream added, broadcasting: 3\nI0415 14:20:26.774701 2562 log.go:172] (0xc0009ee4d0) Reply frame received for 3\nI0415 14:20:26.774756 2562 log.go:172] (0xc0009ee4d0) (0xc0006ee500) Create stream\nI0415 14:20:26.774778 2562 log.go:172] (0xc0009ee4d0) (0xc0006ee500) Stream added, broadcasting: 5\nI0415 14:20:26.775770 2562 log.go:172] (0xc0009ee4d0) Reply frame received for 5\nI0415 14:20:26.838455 2562 log.go:172] (0xc0009ee4d0) Data frame received for 5\nI0415 14:20:26.838505 2562 log.go:172] (0xc0006ee500) (5) Data frame handling\nI0415 14:20:26.838518 2562 log.go:172] (0xc0006ee500) (5) Data frame sent\nI0415 14:20:26.838526 2562 log.go:172] (0xc0009ee4d0) Data frame received for 5\nI0415 14:20:26.838532 2562 log.go:172] (0xc0006ee500) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:20:26.838553 2562 log.go:172] (0xc0009ee4d0) Data frame received for 3\nI0415 14:20:26.838561 2562 log.go:172] (0xc000596460) (3) Data frame handling\nI0415 14:20:26.838568 2562 log.go:172] (0xc000596460) (3) Data frame sent\nI0415 14:20:26.838575 2562 log.go:172] (0xc0009ee4d0) Data frame received for 3\nI0415 14:20:26.838581 2562 log.go:172] (0xc000596460) (3) Data frame handling\nI0415 
14:20:26.840053 2562 log.go:172] (0xc0009ee4d0) Data frame received for 1\nI0415 14:20:26.840081 2562 log.go:172] (0xc0006c2640) (1) Data frame handling\nI0415 14:20:26.840094 2562 log.go:172] (0xc0006c2640) (1) Data frame sent\nI0415 14:20:26.840105 2562 log.go:172] (0xc0009ee4d0) (0xc0006c2640) Stream removed, broadcasting: 1\nI0415 14:20:26.840121 2562 log.go:172] (0xc0009ee4d0) Go away received\nI0415 14:20:26.840566 2562 log.go:172] (0xc0009ee4d0) (0xc0006c2640) Stream removed, broadcasting: 1\nI0415 14:20:26.840588 2562 log.go:172] (0xc0009ee4d0) (0xc000596460) Stream removed, broadcasting: 3\nI0415 14:20:26.840598 2562 log.go:172] (0xc0009ee4d0) (0xc0006ee500) Stream removed, broadcasting: 5\n" Apr 15 14:20:26.844: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:20:26.844: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:20:26.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-756 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:20:27.086: INFO: stderr: "I0415 14:20:26.971851 2583 log.go:172] (0xc0009a4580) (0xc0006d0960) Create stream\nI0415 14:20:26.971922 2583 log.go:172] (0xc0009a4580) (0xc0006d0960) Stream added, broadcasting: 1\nI0415 14:20:26.975304 2583 log.go:172] (0xc0009a4580) Reply frame received for 1\nI0415 14:20:26.975357 2583 log.go:172] (0xc0009a4580) (0xc000842000) Create stream\nI0415 14:20:26.975374 2583 log.go:172] (0xc0009a4580) (0xc000842000) Stream added, broadcasting: 3\nI0415 14:20:26.976335 2583 log.go:172] (0xc0009a4580) Reply frame received for 3\nI0415 14:20:26.976361 2583 log.go:172] (0xc0009a4580) (0xc0006d0a00) Create stream\nI0415 14:20:26.976370 2583 log.go:172] (0xc0009a4580) (0xc0006d0a00) Stream added, broadcasting: 5\nI0415 14:20:26.977530 2583 log.go:172] (0xc0009a4580) Reply frame received for 
5\nI0415 14:20:27.049317 2583 log.go:172] (0xc0009a4580) Data frame received for 5\nI0415 14:20:27.049355 2583 log.go:172] (0xc0006d0a00) (5) Data frame handling\nI0415 14:20:27.049371 2583 log.go:172] (0xc0006d0a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:20:27.079984 2583 log.go:172] (0xc0009a4580) Data frame received for 3\nI0415 14:20:27.080025 2583 log.go:172] (0xc000842000) (3) Data frame handling\nI0415 14:20:27.080034 2583 log.go:172] (0xc000842000) (3) Data frame sent\nI0415 14:20:27.080039 2583 log.go:172] (0xc0009a4580) Data frame received for 3\nI0415 14:20:27.080073 2583 log.go:172] (0xc0009a4580) Data frame received for 5\nI0415 14:20:27.080127 2583 log.go:172] (0xc0006d0a00) (5) Data frame handling\nI0415 14:20:27.080160 2583 log.go:172] (0xc000842000) (3) Data frame handling\nI0415 14:20:27.081950 2583 log.go:172] (0xc0009a4580) Data frame received for 1\nI0415 14:20:27.081968 2583 log.go:172] (0xc0006d0960) (1) Data frame handling\nI0415 14:20:27.081985 2583 log.go:172] (0xc0006d0960) (1) Data frame sent\nI0415 14:20:27.081997 2583 log.go:172] (0xc0009a4580) (0xc0006d0960) Stream removed, broadcasting: 1\nI0415 14:20:27.082145 2583 log.go:172] (0xc0009a4580) Go away received\nI0415 14:20:27.082315 2583 log.go:172] (0xc0009a4580) (0xc0006d0960) Stream removed, broadcasting: 1\nI0415 14:20:27.082332 2583 log.go:172] (0xc0009a4580) (0xc000842000) Stream removed, broadcasting: 3\nI0415 14:20:27.082340 2583 log.go:172] (0xc0009a4580) (0xc0006d0a00) Stream removed, broadcasting: 5\n" Apr 15 14:20:27.086: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:20:27.086: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:20:27.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-756 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ 
|| true' Apr 15 14:20:27.315: INFO: stderr: "I0415 14:20:27.213742 2607 log.go:172] (0xc0009ea4d0) (0xc000322820) Create stream\nI0415 14:20:27.213814 2607 log.go:172] (0xc0009ea4d0) (0xc000322820) Stream added, broadcasting: 1\nI0415 14:20:27.217390 2607 log.go:172] (0xc0009ea4d0) Reply frame received for 1\nI0415 14:20:27.217516 2607 log.go:172] (0xc0009ea4d0) (0xc00095a000) Create stream\nI0415 14:20:27.217568 2607 log.go:172] (0xc0009ea4d0) (0xc00095a000) Stream added, broadcasting: 3\nI0415 14:20:27.218762 2607 log.go:172] (0xc0009ea4d0) Reply frame received for 3\nI0415 14:20:27.218795 2607 log.go:172] (0xc0009ea4d0) (0xc00095a0a0) Create stream\nI0415 14:20:27.218810 2607 log.go:172] (0xc0009ea4d0) (0xc00095a0a0) Stream added, broadcasting: 5\nI0415 14:20:27.219639 2607 log.go:172] (0xc0009ea4d0) Reply frame received for 5\nI0415 14:20:27.271872 2607 log.go:172] (0xc0009ea4d0) Data frame received for 5\nI0415 14:20:27.271892 2607 log.go:172] (0xc00095a0a0) (5) Data frame handling\nI0415 14:20:27.271902 2607 log.go:172] (0xc00095a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:20:27.307560 2607 log.go:172] (0xc0009ea4d0) Data frame received for 3\nI0415 14:20:27.307603 2607 log.go:172] (0xc00095a000) (3) Data frame handling\nI0415 14:20:27.307624 2607 log.go:172] (0xc00095a000) (3) Data frame sent\nI0415 14:20:27.307735 2607 log.go:172] (0xc0009ea4d0) Data frame received for 3\nI0415 14:20:27.307769 2607 log.go:172] (0xc00095a000) (3) Data frame handling\nI0415 14:20:27.307970 2607 log.go:172] (0xc0009ea4d0) Data frame received for 5\nI0415 14:20:27.308004 2607 log.go:172] (0xc00095a0a0) (5) Data frame handling\nI0415 14:20:27.309882 2607 log.go:172] (0xc0009ea4d0) Data frame received for 1\nI0415 14:20:27.309926 2607 log.go:172] (0xc000322820) (1) Data frame handling\nI0415 14:20:27.309946 2607 log.go:172] (0xc000322820) (1) Data frame sent\nI0415 14:20:27.309963 2607 log.go:172] (0xc0009ea4d0) (0xc000322820) Stream 
removed, broadcasting: 1\nI0415 14:20:27.309984 2607 log.go:172] (0xc0009ea4d0) Go away received\nI0415 14:20:27.310471 2607 log.go:172] (0xc0009ea4d0) (0xc000322820) Stream removed, broadcasting: 1\nI0415 14:20:27.310495 2607 log.go:172] (0xc0009ea4d0) (0xc00095a000) Stream removed, broadcasting: 3\nI0415 14:20:27.310507 2607 log.go:172] (0xc0009ea4d0) (0xc00095a0a0) Stream removed, broadcasting: 5\n" Apr 15 14:20:27.315: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:20:27.315: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:20:27.315: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 14:20:27.319: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 15 14:20:37.327: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:20:37.327: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:20:37.327: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:20:37.371: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 14:20:37.371: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC }] Apr 15 14:20:37.371: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:37.371: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:37.371: INFO: Apr 15 14:20:37.371: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 14:20:38.375: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 14:20:38.375: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC }] Apr 15 14:20:38.375: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:38.375: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:38.375: INFO: Apr 15 14:20:38.375: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 14:20:39.383: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 14:20:39.383: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC }] Apr 15 14:20:39.383: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:39.383: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 
14:20:15 +0000 UTC }] Apr 15 14:20:39.383: INFO: Apr 15 14:20:39.383: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 14:20:40.400: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 14:20:40.400: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:19:53 +0000 UTC }] Apr 15 14:20:40.401: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:40.401: INFO: Apr 15 14:20:40.401: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 15 14:20:41.406: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 14:20:41.406: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 14:20:15 +0000 UTC }] Apr 15 14:20:41.406: INFO: Apr 15 14:20:41.406: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 15 14:20:42.411: INFO: Verifying statefulset ss doesn't scale past 0 
for another 4.928836793s Apr 15 14:20:43.414: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.924009614s Apr 15 14:20:44.419: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.920508935s Apr 15 14:20:45.423: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.915759868s Apr 15 14:20:46.428: INFO: Verifying statefulset ss doesn't scale past 0 for another 911.137467ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-756 Apr 15 14:20:47.433: INFO: Scaling statefulset ss to 0 Apr 15 14:20:47.443: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 15 14:20:47.446: INFO: Deleting all statefulsets in ns statefulset-756 Apr 15 14:20:47.448: INFO: Scaling statefulset ss to 0 Apr 15 14:20:47.456: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 14:20:47.458: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:20:47.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-756" for this suite. 
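The repeated "doesn't scale past 0 for another …s" messages above come from a poll-until-timeout loop in the e2e framework. A minimal shell sketch of that pattern follows; the helper name `wait_for` and the one-second granularity are illustrative, not the framework's actual implementation:

```shell
# wait_for TIMEOUT CMD...: re-run CMD once per second until it succeeds,
# failing once TIMEOUT seconds have elapsed without a success.
wait_for() {
  timeout=$1; shift
  elapsed=0
  while ! "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1   # timed out; mirrors the framework's failure path
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 0
}

# Example: poll for a marker file (a stand-in for "pod is Ready"):
#   wait_for 30 test -e /tmp/ready
```

The framework's version additionally logs the remaining budget on every iteration, which is what produces the countdown lines in the log.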
Apr 15 14:20:53.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:20:53.554: INFO: namespace statefulset-756 deletion completed in 6.083333516s • [SLOW TEST:60.313 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:20:53.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Apr 15 14:20:53.592: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Apr 15 14:20:53.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
create -f - --namespace=kubectl-8062' Apr 15 14:20:54.011: INFO: stderr: "" Apr 15 14:20:54.011: INFO: stdout: "service/redis-slave created\n" Apr 15 14:20:54.011: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Apr 15 14:20:54.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8062' Apr 15 14:20:54.331: INFO: stderr: "" Apr 15 14:20:54.331: INFO: stdout: "service/redis-master created\n" Apr 15 14:20:54.331: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 15 14:20:54.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8062' Apr 15 14:20:54.691: INFO: stderr: "" Apr 15 14:20:54.691: INFO: stdout: "service/frontend created\n" Apr 15 14:20:54.691: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Apr 15 14:20:54.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8062' Apr 15 14:20:54.986: INFO: stderr: "" Apr 15 14:20:54.986: INFO: 
stdout: "deployment.apps/frontend created\n" Apr 15 14:20:54.986: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 15 14:20:54.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8062' Apr 15 14:20:55.337: INFO: stderr: "" Apr 15 14:20:55.337: INFO: stdout: "deployment.apps/redis-master created\n" Apr 15 14:20:55.337: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Apr 15 14:20:55.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8062' Apr 15 14:20:55.644: INFO: stderr: "" Apr 15 14:20:55.644: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Apr 15 14:20:55.644: INFO: Waiting for all frontend pods to be Running. Apr 15 14:21:05.694: INFO: Waiting for frontend to serve content. Apr 15 14:21:05.714: INFO: Trying to add a new entry to the guestbook. Apr 15 14:21:05.731: INFO: Verifying that added entry can be retrieved. 
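Each manifest above is piped to kubectl on stdin (`create -f -`). That heredoc-to-stdin pattern can be sketched without a cluster; `count_kinds` below is a hypothetical stand-in for `kubectl create -f - --namespace=…`, used only so the example runs locally:

```shell
# Hypothetical stand-in for `kubectl create -f -`: consume a manifest from
# stdin. Instead of creating objects, it just counts top-level `kind:` lines.
count_kinds() {
  grep -c '^kind:'
}

# Pipe a manifest via heredoc, exactly as the test pipes YAML to kubectl.
cat <<'EOF' | count_kinds
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
EOF
```

Run as-is, the snippet prints `1` (one `kind:` line in the piped document); with a real cluster the same `cat <<EOF | kubectl create -f -` shape creates the object.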
STEP: using delete to clean up resources Apr 15 14:21:05.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8062' Apr 15 14:21:05.880: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 14:21:05.880: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 15 14:21:05.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8062' Apr 15 14:21:06.037: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 14:21:06.037: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 15 14:21:06.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8062' Apr 15 14:21:06.148: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 14:21:06.148: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 15 14:21:06.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8062' Apr 15 14:21:06.263: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 15 14:21:06.263: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 15 14:21:06.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8062' Apr 15 14:21:06.346: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 14:21:06.346: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 15 14:21:06.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8062' Apr 15 14:21:06.814: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 14:21:06.814: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:21:06.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8062" for this suite. 
Apr 15 14:21:44.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:21:45.027: INFO: namespace kubectl-8062 deletion completed in 38.137853345s • [SLOW TEST:51.471 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:21:45.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0415 14:21:55.133253 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 15 14:21:55.133: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:21:55.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9309" for this suite. 
Apr 15 14:22:01.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:22:01.239: INFO: namespace gc-9309 deletion completed in 6.102858164s • [SLOW TEST:16.212 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:22:01.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1448 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1448 STEP: Waiting until all stateful set ss replicas will be running in 
namespace statefulset-1448 Apr 15 14:22:01.333: INFO: Found 0 stateful pods, waiting for 1 Apr 15 14:22:11.338: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 15 14:22:11.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:22:11.574: INFO: stderr: "I0415 14:22:11.462461 2880 log.go:172] (0xc0008fa420) (0xc0001fc820) Create stream\nI0415 14:22:11.462516 2880 log.go:172] (0xc0008fa420) (0xc0001fc820) Stream added, broadcasting: 1\nI0415 14:22:11.464338 2880 log.go:172] (0xc0008fa420) Reply frame received for 1\nI0415 14:22:11.464393 2880 log.go:172] (0xc0008fa420) (0xc000896000) Create stream\nI0415 14:22:11.464408 2880 log.go:172] (0xc0008fa420) (0xc000896000) Stream added, broadcasting: 3\nI0415 14:22:11.465343 2880 log.go:172] (0xc0008fa420) Reply frame received for 3\nI0415 14:22:11.465371 2880 log.go:172] (0xc0008fa420) (0xc000750000) Create stream\nI0415 14:22:11.465377 2880 log.go:172] (0xc0008fa420) (0xc000750000) Stream added, broadcasting: 5\nI0415 14:22:11.466392 2880 log.go:172] (0xc0008fa420) Reply frame received for 5\nI0415 14:22:11.540742 2880 log.go:172] (0xc0008fa420) Data frame received for 5\nI0415 14:22:11.540784 2880 log.go:172] (0xc000750000) (5) Data frame handling\nI0415 14:22:11.540811 2880 log.go:172] (0xc000750000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:22:11.566638 2880 log.go:172] (0xc0008fa420) Data frame received for 3\nI0415 14:22:11.566689 2880 log.go:172] (0xc000896000) (3) Data frame handling\nI0415 14:22:11.566760 2880 log.go:172] (0xc000896000) (3) Data frame sent\nI0415 14:22:11.567044 2880 log.go:172] (0xc0008fa420) Data frame received for 5\nI0415 14:22:11.567085 2880 log.go:172] (0xc000750000) (5) Data frame 
handling\nI0415 14:22:11.567116 2880 log.go:172] (0xc0008fa420) Data frame received for 3\nI0415 14:22:11.567127 2880 log.go:172] (0xc000896000) (3) Data frame handling\nI0415 14:22:11.568895 2880 log.go:172] (0xc0008fa420) Data frame received for 1\nI0415 14:22:11.568913 2880 log.go:172] (0xc0001fc820) (1) Data frame handling\nI0415 14:22:11.568929 2880 log.go:172] (0xc0001fc820) (1) Data frame sent\nI0415 14:22:11.568952 2880 log.go:172] (0xc0008fa420) (0xc0001fc820) Stream removed, broadcasting: 1\nI0415 14:22:11.569027 2880 log.go:172] (0xc0008fa420) Go away received\nI0415 14:22:11.569457 2880 log.go:172] (0xc0008fa420) (0xc0001fc820) Stream removed, broadcasting: 1\nI0415 14:22:11.569478 2880 log.go:172] (0xc0008fa420) (0xc000896000) Stream removed, broadcasting: 3\nI0415 14:22:11.569489 2880 log.go:172] (0xc0008fa420) (0xc000750000) Stream removed, broadcasting: 5\n" Apr 15 14:22:11.574: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:22:11.574: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:22:11.579: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 15 14:22:21.583: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:22:21.583: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 14:22:21.612: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999463s Apr 15 14:22:22.617: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981366908s Apr 15 14:22:23.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.976593477s Apr 15 14:22:24.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972381862s Apr 15 14:22:25.631: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.967216462s Apr 15 14:22:26.636: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 4.962411592s Apr 15 14:22:27.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.957445384s Apr 15 14:22:28.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.952526142s Apr 15 14:22:29.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.947747145s Apr 15 14:22:30.655: INFO: Verifying statefulset ss doesn't scale past 1 for another 942.845374ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1448 Apr 15 14:22:31.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:22:31.887: INFO: stderr: "I0415 14:22:31.795386 2900 log.go:172] (0xc000a30420) (0xc000308820) Create stream\nI0415 14:22:31.795453 2900 log.go:172] (0xc000a30420) (0xc000308820) Stream added, broadcasting: 1\nI0415 14:22:31.798091 2900 log.go:172] (0xc000a30420) Reply frame received for 1\nI0415 14:22:31.798131 2900 log.go:172] (0xc000a30420) (0xc000808000) Create stream\nI0415 14:22:31.798147 2900 log.go:172] (0xc000a30420) (0xc000808000) Stream added, broadcasting: 3\nI0415 14:22:31.799194 2900 log.go:172] (0xc000a30420) Reply frame received for 3\nI0415 14:22:31.799236 2900 log.go:172] (0xc000a30420) (0xc0003088c0) Create stream\nI0415 14:22:31.799246 2900 log.go:172] (0xc000a30420) (0xc0003088c0) Stream added, broadcasting: 5\nI0415 14:22:31.800252 2900 log.go:172] (0xc000a30420) Reply frame received for 5\nI0415 14:22:31.880512 2900 log.go:172] (0xc000a30420) Data frame received for 3\nI0415 14:22:31.880551 2900 log.go:172] (0xc000808000) (3) Data frame handling\nI0415 14:22:31.880564 2900 log.go:172] (0xc000808000) (3) Data frame sent\nI0415 14:22:31.880572 2900 log.go:172] (0xc000a30420) Data frame received for 3\nI0415 14:22:31.880578 2900 log.go:172] (0xc000808000) (3) Data frame handling\nI0415 14:22:31.880605 
2900 log.go:172] (0xc000a30420) Data frame received for 5\nI0415 14:22:31.880613 2900 log.go:172] (0xc0003088c0) (5) Data frame handling\nI0415 14:22:31.880621 2900 log.go:172] (0xc0003088c0) (5) Data frame sent\nI0415 14:22:31.880628 2900 log.go:172] (0xc000a30420) Data frame received for 5\nI0415 14:22:31.880634 2900 log.go:172] (0xc0003088c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 14:22:31.882441 2900 log.go:172] (0xc000a30420) Data frame received for 1\nI0415 14:22:31.882465 2900 log.go:172] (0xc000308820) (1) Data frame handling\nI0415 14:22:31.882481 2900 log.go:172] (0xc000308820) (1) Data frame sent\nI0415 14:22:31.882497 2900 log.go:172] (0xc000a30420) (0xc000308820) Stream removed, broadcasting: 1\nI0415 14:22:31.882516 2900 log.go:172] (0xc000a30420) Go away received\nI0415 14:22:31.882876 2900 log.go:172] (0xc000a30420) (0xc000308820) Stream removed, broadcasting: 1\nI0415 14:22:31.882898 2900 log.go:172] (0xc000a30420) (0xc000808000) Stream removed, broadcasting: 3\nI0415 14:22:31.882904 2900 log.go:172] (0xc000a30420) (0xc0003088c0) Stream removed, broadcasting: 5\n" Apr 15 14:22:31.888: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:22:31.888: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:22:31.891: INFO: Found 1 stateful pods, waiting for 3 Apr 15 14:22:41.896: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 15 14:22:41.896: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 15 14:22:41.896: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 15 14:22:41.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:22:42.113: INFO: stderr: "I0415 14:22:42.026434 2920 log.go:172] (0xc000a62630) (0xc000716aa0) Create stream\nI0415 14:22:42.026487 2920 log.go:172] (0xc000a62630) (0xc000716aa0) Stream added, broadcasting: 1\nI0415 14:22:42.030610 2920 log.go:172] (0xc000a62630) Reply frame received for 1\nI0415 14:22:42.030679 2920 log.go:172] (0xc000a62630) (0xc000716320) Create stream\nI0415 14:22:42.030696 2920 log.go:172] (0xc000a62630) (0xc000716320) Stream added, broadcasting: 3\nI0415 14:22:42.031775 2920 log.go:172] (0xc000a62630) Reply frame received for 3\nI0415 14:22:42.031836 2920 log.go:172] (0xc000a62630) (0xc0007163c0) Create stream\nI0415 14:22:42.031870 2920 log.go:172] (0xc000a62630) (0xc0007163c0) Stream added, broadcasting: 5\nI0415 14:22:42.033014 2920 log.go:172] (0xc000a62630) Reply frame received for 5\nI0415 14:22:42.106168 2920 log.go:172] (0xc000a62630) Data frame received for 5\nI0415 14:22:42.106229 2920 log.go:172] (0xc0007163c0) (5) Data frame handling\nI0415 14:22:42.106249 2920 log.go:172] (0xc0007163c0) (5) Data frame sent\nI0415 14:22:42.106264 2920 log.go:172] (0xc000a62630) Data frame received for 5\nI0415 14:22:42.106276 2920 log.go:172] (0xc0007163c0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:22:42.106302 2920 log.go:172] (0xc000a62630) Data frame received for 3\nI0415 14:22:42.106323 2920 log.go:172] (0xc000716320) (3) Data frame handling\nI0415 14:22:42.106345 2920 log.go:172] (0xc000716320) (3) Data frame sent\nI0415 14:22:42.106368 2920 log.go:172] (0xc000a62630) Data frame received for 3\nI0415 14:22:42.106418 2920 log.go:172] (0xc000716320) (3) Data frame handling\nI0415 14:22:42.108019 2920 log.go:172] (0xc000a62630) Data frame received for 1\nI0415 14:22:42.108059 2920 log.go:172] (0xc000716aa0) (1) Data frame handling\nI0415 14:22:42.108090 2920 log.go:172] (0xc000716aa0) (1) Data 
frame sent\nI0415 14:22:42.108109 2920 log.go:172] (0xc000a62630) (0xc000716aa0) Stream removed, broadcasting: 1\nI0415 14:22:42.108123 2920 log.go:172] (0xc000a62630) Go away received\nI0415 14:22:42.108494 2920 log.go:172] (0xc000a62630) (0xc000716aa0) Stream removed, broadcasting: 1\nI0415 14:22:42.108512 2920 log.go:172] (0xc000a62630) (0xc000716320) Stream removed, broadcasting: 3\nI0415 14:22:42.108521 2920 log.go:172] (0xc000a62630) (0xc0007163c0) Stream removed, broadcasting: 5\n" Apr 15 14:22:42.113: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:22:42.113: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:22:42.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:22:42.339: INFO: stderr: "I0415 14:22:42.244702 2942 log.go:172] (0xc000a00630) (0xc0005b6c80) Create stream\nI0415 14:22:42.244796 2942 log.go:172] (0xc000a00630) (0xc0005b6c80) Stream added, broadcasting: 1\nI0415 14:22:42.253980 2942 log.go:172] (0xc000a00630) Reply frame received for 1\nI0415 14:22:42.254030 2942 log.go:172] (0xc000a00630) (0xc0005b63c0) Create stream\nI0415 14:22:42.254045 2942 log.go:172] (0xc000a00630) (0xc0005b63c0) Stream added, broadcasting: 3\nI0415 14:22:42.257343 2942 log.go:172] (0xc000a00630) Reply frame received for 3\nI0415 14:22:42.257400 2942 log.go:172] (0xc000a00630) (0xc0001e8000) Create stream\nI0415 14:22:42.257415 2942 log.go:172] (0xc000a00630) (0xc0001e8000) Stream added, broadcasting: 5\nI0415 14:22:42.258943 2942 log.go:172] (0xc000a00630) Reply frame received for 5\nI0415 14:22:42.305769 2942 log.go:172] (0xc000a00630) Data frame received for 5\nI0415 14:22:42.305803 2942 log.go:172] (0xc0001e8000) (5) Data frame handling\nI0415 14:22:42.305828 2942 log.go:172] (0xc0001e8000) (5) 
Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:22:42.332035 2942 log.go:172] (0xc000a00630) Data frame received for 3\nI0415 14:22:42.332075 2942 log.go:172] (0xc0005b63c0) (3) Data frame handling\nI0415 14:22:42.332100 2942 log.go:172] (0xc0005b63c0) (3) Data frame sent\nI0415 14:22:42.332121 2942 log.go:172] (0xc000a00630) Data frame received for 3\nI0415 14:22:42.332139 2942 log.go:172] (0xc0005b63c0) (3) Data frame handling\nI0415 14:22:42.332295 2942 log.go:172] (0xc000a00630) Data frame received for 5\nI0415 14:22:42.332325 2942 log.go:172] (0xc0001e8000) (5) Data frame handling\nI0415 14:22:42.334481 2942 log.go:172] (0xc000a00630) Data frame received for 1\nI0415 14:22:42.334515 2942 log.go:172] (0xc0005b6c80) (1) Data frame handling\nI0415 14:22:42.334538 2942 log.go:172] (0xc0005b6c80) (1) Data frame sent\nI0415 14:22:42.334556 2942 log.go:172] (0xc000a00630) (0xc0005b6c80) Stream removed, broadcasting: 1\nI0415 14:22:42.334588 2942 log.go:172] (0xc000a00630) Go away received\nI0415 14:22:42.335121 2942 log.go:172] (0xc000a00630) (0xc0005b6c80) Stream removed, broadcasting: 1\nI0415 14:22:42.335144 2942 log.go:172] (0xc000a00630) (0xc0005b63c0) Stream removed, broadcasting: 3\nI0415 14:22:42.335156 2942 log.go:172] (0xc000a00630) (0xc0001e8000) Stream removed, broadcasting: 5\n" Apr 15 14:22:42.340: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:22:42.340: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:22:42.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 15 14:22:42.584: INFO: stderr: "I0415 14:22:42.468753 2963 log.go:172] (0xc000116f20) (0xc000710c80) Create stream\nI0415 14:22:42.468803 2963 log.go:172] (0xc000116f20) (0xc000710c80) Stream added, 
broadcasting: 1\nI0415 14:22:42.471030 2963 log.go:172] (0xc000116f20) Reply frame received for 1\nI0415 14:22:42.471094 2963 log.go:172] (0xc000116f20) (0xc000996000) Create stream\nI0415 14:22:42.471117 2963 log.go:172] (0xc000116f20) (0xc000996000) Stream added, broadcasting: 3\nI0415 14:22:42.471943 2963 log.go:172] (0xc000116f20) Reply frame received for 3\nI0415 14:22:42.471969 2963 log.go:172] (0xc000116f20) (0xc0009960a0) Create stream\nI0415 14:22:42.471977 2963 log.go:172] (0xc000116f20) (0xc0009960a0) Stream added, broadcasting: 5\nI0415 14:22:42.472867 2963 log.go:172] (0xc000116f20) Reply frame received for 5\nI0415 14:22:42.535107 2963 log.go:172] (0xc000116f20) Data frame received for 5\nI0415 14:22:42.535141 2963 log.go:172] (0xc0009960a0) (5) Data frame handling\nI0415 14:22:42.535160 2963 log.go:172] (0xc0009960a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0415 14:22:42.571972 2963 log.go:172] (0xc000116f20) Data frame received for 3\nI0415 14:22:42.572014 2963 log.go:172] (0xc000996000) (3) Data frame handling\nI0415 14:22:42.572050 2963 log.go:172] (0xc000996000) (3) Data frame sent\nI0415 14:22:42.572076 2963 log.go:172] (0xc000116f20) Data frame received for 3\nI0415 14:22:42.572137 2963 log.go:172] (0xc000996000) (3) Data frame handling\nI0415 14:22:42.572158 2963 log.go:172] (0xc000116f20) Data frame received for 5\nI0415 14:22:42.572176 2963 log.go:172] (0xc0009960a0) (5) Data frame handling\nI0415 14:22:42.574165 2963 log.go:172] (0xc000116f20) Data frame received for 1\nI0415 14:22:42.575024 2963 log.go:172] (0xc000710c80) (1) Data frame handling\nI0415 14:22:42.575078 2963 log.go:172] (0xc000710c80) (1) Data frame sent\nI0415 14:22:42.575114 2963 log.go:172] (0xc000116f20) (0xc000710c80) Stream removed, broadcasting: 1\nI0415 14:22:42.577993 2963 log.go:172] (0xc000116f20) Go away received\nI0415 14:22:42.578683 2963 log.go:172] (0xc000116f20) (0xc000710c80) Stream removed, broadcasting: 1\nI0415 
14:22:42.578723 2963 log.go:172] (0xc000116f20) (0xc000996000) Stream removed, broadcasting: 3\nI0415 14:22:42.578745 2963 log.go:172] (0xc000116f20) (0xc0009960a0) Stream removed, broadcasting: 5\n" Apr 15 14:22:42.584: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 15 14:22:42.584: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 15 14:22:42.584: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 14:22:42.588: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 15 14:22:52.596: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:22:52.596: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:22:52.596: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 15 14:22:52.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999688s Apr 15 14:22:53.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991790055s Apr 15 14:22:54.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986442887s Apr 15 14:22:55.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981146317s Apr 15 14:22:56.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976454017s Apr 15 14:22:57.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970970499s Apr 15 14:22:58.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965961878s Apr 15 14:22:59.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960795374s Apr 15 14:23:00.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955161855s Apr 15 14:23:01.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.007887ms STEP: Scaling down stateful set ss to 0 replicas and 
waiting until none of pods will run in namespace statefulset-1448 Apr 15 14:23:02.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:23:02.889: INFO: stderr: "I0415 14:23:02.792823 2984 log.go:172] (0xc000a66000) (0xc0005503c0) Create stream\nI0415 14:23:02.792908 2984 log.go:172] (0xc000a66000) (0xc0005503c0) Stream added, broadcasting: 1\nI0415 14:23:02.796600 2984 log.go:172] (0xc000a66000) Reply frame received for 1\nI0415 14:23:02.796649 2984 log.go:172] (0xc000a66000) (0xc0001e4000) Create stream\nI0415 14:23:02.796665 2984 log.go:172] (0xc000a66000) (0xc0001e4000) Stream added, broadcasting: 3\nI0415 14:23:02.798369 2984 log.go:172] (0xc000a66000) Reply frame received for 3\nI0415 14:23:02.798420 2984 log.go:172] (0xc000a66000) (0xc000222000) Create stream\nI0415 14:23:02.798433 2984 log.go:172] (0xc000a66000) (0xc000222000) Stream added, broadcasting: 5\nI0415 14:23:02.799547 2984 log.go:172] (0xc000a66000) Reply frame received for 5\nI0415 14:23:02.882035 2984 log.go:172] (0xc000a66000) Data frame received for 5\nI0415 14:23:02.882116 2984 log.go:172] (0xc000222000) (5) Data frame handling\nI0415 14:23:02.882142 2984 log.go:172] (0xc000222000) (5) Data frame sent\nI0415 14:23:02.882159 2984 log.go:172] (0xc000a66000) Data frame received for 5\nI0415 14:23:02.882175 2984 log.go:172] (0xc000222000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 14:23:02.882214 2984 log.go:172] (0xc000a66000) Data frame received for 3\nI0415 14:23:02.882248 2984 log.go:172] (0xc0001e4000) (3) Data frame handling\nI0415 14:23:02.882272 2984 log.go:172] (0xc0001e4000) (3) Data frame sent\nI0415 14:23:02.882286 2984 log.go:172] (0xc000a66000) Data frame received for 3\nI0415 14:23:02.882314 2984 log.go:172] (0xc0001e4000) (3) Data frame handling\nI0415 14:23:02.883777 2984 log.go:172] (0xc000a66000) Data 
frame received for 1\nI0415 14:23:02.883871 2984 log.go:172] (0xc0005503c0) (1) Data frame handling\nI0415 14:23:02.883932 2984 log.go:172] (0xc0005503c0) (1) Data frame sent\nI0415 14:23:02.883978 2984 log.go:172] (0xc000a66000) (0xc0005503c0) Stream removed, broadcasting: 1\nI0415 14:23:02.884007 2984 log.go:172] (0xc000a66000) Go away received\nI0415 14:23:02.884349 2984 log.go:172] (0xc000a66000) (0xc0005503c0) Stream removed, broadcasting: 1\nI0415 14:23:02.884371 2984 log.go:172] (0xc000a66000) (0xc0001e4000) Stream removed, broadcasting: 3\nI0415 14:23:02.884382 2984 log.go:172] (0xc000a66000) (0xc000222000) Stream removed, broadcasting: 5\n" Apr 15 14:23:02.890: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:23:02.890: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:23:02.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:23:03.126: INFO: stderr: "I0415 14:23:03.054815 3006 log.go:172] (0xc000952420) (0xc0004a06e0) Create stream\nI0415 14:23:03.054887 3006 log.go:172] (0xc000952420) (0xc0004a06e0) Stream added, broadcasting: 1\nI0415 14:23:03.057563 3006 log.go:172] (0xc000952420) Reply frame received for 1\nI0415 14:23:03.057601 3006 log.go:172] (0xc000952420) (0xc0004a0780) Create stream\nI0415 14:23:03.057613 3006 log.go:172] (0xc000952420) (0xc0004a0780) Stream added, broadcasting: 3\nI0415 14:23:03.058615 3006 log.go:172] (0xc000952420) Reply frame received for 3\nI0415 14:23:03.058686 3006 log.go:172] (0xc000952420) (0xc0008ba000) Create stream\nI0415 14:23:03.058712 3006 log.go:172] (0xc000952420) (0xc0008ba000) Stream added, broadcasting: 5\nI0415 14:23:03.059593 3006 log.go:172] (0xc000952420) Reply frame received for 5\nI0415 14:23:03.120220 3006 log.go:172] (0xc000952420) 
Data frame received for 5\nI0415 14:23:03.120254 3006 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0415 14:23:03.120263 3006 log.go:172] (0xc0008ba000) (5) Data frame sent\nI0415 14:23:03.120268 3006 log.go:172] (0xc000952420) Data frame received for 5\nI0415 14:23:03.120273 3006 log.go:172] (0xc0008ba000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 14:23:03.120355 3006 log.go:172] (0xc000952420) Data frame received for 3\nI0415 14:23:03.120397 3006 log.go:172] (0xc0004a0780) (3) Data frame handling\nI0415 14:23:03.120421 3006 log.go:172] (0xc0004a0780) (3) Data frame sent\nI0415 14:23:03.120440 3006 log.go:172] (0xc000952420) Data frame received for 3\nI0415 14:23:03.120477 3006 log.go:172] (0xc0004a0780) (3) Data frame handling\nI0415 14:23:03.122142 3006 log.go:172] (0xc000952420) Data frame received for 1\nI0415 14:23:03.122153 3006 log.go:172] (0xc0004a06e0) (1) Data frame handling\nI0415 14:23:03.122158 3006 log.go:172] (0xc0004a06e0) (1) Data frame sent\nI0415 14:23:03.122232 3006 log.go:172] (0xc000952420) (0xc0004a06e0) Stream removed, broadcasting: 1\nI0415 14:23:03.122249 3006 log.go:172] (0xc000952420) Go away received\nI0415 14:23:03.122696 3006 log.go:172] (0xc000952420) (0xc0004a06e0) Stream removed, broadcasting: 1\nI0415 14:23:03.122724 3006 log.go:172] (0xc000952420) (0xc0004a0780) Stream removed, broadcasting: 3\nI0415 14:23:03.122744 3006 log.go:172] (0xc000952420) (0xc0008ba000) Stream removed, broadcasting: 5\n" Apr 15 14:23:03.126: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:23:03.126: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:23:03.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 15 14:23:03.331: INFO: stderr: "I0415 
14:23:03.255567 3024 log.go:172] (0xc0009f0420) (0xc000294820) Create stream\nI0415 14:23:03.255610 3024 log.go:172] (0xc0009f0420) (0xc000294820) Stream added, broadcasting: 1\nI0415 14:23:03.258232 3024 log.go:172] (0xc0009f0420) Reply frame received for 1\nI0415 14:23:03.258551 3024 log.go:172] (0xc0009f0420) (0xc0009ea000) Create stream\nI0415 14:23:03.258580 3024 log.go:172] (0xc0009f0420) (0xc0009ea000) Stream added, broadcasting: 3\nI0415 14:23:03.263526 3024 log.go:172] (0xc0009f0420) Reply frame received for 3\nI0415 14:23:03.263725 3024 log.go:172] (0xc0009f0420) (0xc000a16000) Create stream\nI0415 14:23:03.263751 3024 log.go:172] (0xc0009f0420) (0xc000a16000) Stream added, broadcasting: 5\nI0415 14:23:03.264783 3024 log.go:172] (0xc0009f0420) Reply frame received for 5\nI0415 14:23:03.323337 3024 log.go:172] (0xc0009f0420) Data frame received for 3\nI0415 14:23:03.323384 3024 log.go:172] (0xc0009f0420) Data frame received for 5\nI0415 14:23:03.323420 3024 log.go:172] (0xc000a16000) (5) Data frame handling\nI0415 14:23:03.323443 3024 log.go:172] (0xc000a16000) (5) Data frame sent\nI0415 14:23:03.323462 3024 log.go:172] (0xc0009f0420) Data frame received for 5\nI0415 14:23:03.323477 3024 log.go:172] (0xc000a16000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0415 14:23:03.323538 3024 log.go:172] (0xc0009ea000) (3) Data frame handling\nI0415 14:23:03.323587 3024 log.go:172] (0xc0009ea000) (3) Data frame sent\nI0415 14:23:03.323614 3024 log.go:172] (0xc0009f0420) Data frame received for 3\nI0415 14:23:03.323640 3024 log.go:172] (0xc0009ea000) (3) Data frame handling\nI0415 14:23:03.325497 3024 log.go:172] (0xc0009f0420) Data frame received for 1\nI0415 14:23:03.325528 3024 log.go:172] (0xc000294820) (1) Data frame handling\nI0415 14:23:03.325555 3024 log.go:172] (0xc000294820) (1) Data frame sent\nI0415 14:23:03.325598 3024 log.go:172] (0xc0009f0420) (0xc000294820) Stream removed, broadcasting: 1\nI0415 14:23:03.325778 3024 
log.go:172] (0xc0009f0420) Go away received\nI0415 14:23:03.326073 3024 log.go:172] (0xc0009f0420) (0xc000294820) Stream removed, broadcasting: 1\nI0415 14:23:03.326102 3024 log.go:172] (0xc0009f0420) (0xc0009ea000) Stream removed, broadcasting: 3\nI0415 14:23:03.326118 3024 log.go:172] (0xc0009f0420) (0xc000a16000) Stream removed, broadcasting: 5\n" Apr 15 14:23:03.331: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 15 14:23:03.331: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 15 14:23:03.331: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 15 14:23:23.348: INFO: Deleting all statefulset in ns statefulset-1448 Apr 15 14:23:23.351: INFO: Scaling statefulset ss to 0 Apr 15 14:23:23.359: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 14:23:23.362: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:23:23.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1448" for this suite. 
Apr 15 14:23:29.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:23:29.481: INFO: namespace statefulset-1448 deletion completed in 6.101396058s • [SLOW TEST:88.241 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:23:29.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 15 14:23:29.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 
--namespace=kubectl-7351' Apr 15 14:23:29.662: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 15 14:23:29.662: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 15 14:23:29.702: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bp8f4] Apr 15 14:23:29.702: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bp8f4" in namespace "kubectl-7351" to be "running and ready" Apr 15 14:23:29.724: INFO: Pod "e2e-test-nginx-rc-bp8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 21.994382ms Apr 15 14:23:31.810: INFO: Pod "e2e-test-nginx-rc-bp8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108290479s Apr 15 14:23:33.814: INFO: Pod "e2e-test-nginx-rc-bp8f4": Phase="Running", Reason="", readiness=true. Elapsed: 4.111967969s Apr 15 14:23:33.814: INFO: Pod "e2e-test-nginx-rc-bp8f4" satisfied condition "running and ready" Apr 15 14:23:33.814: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-bp8f4] Apr 15 14:23:33.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7351' Apr 15 14:23:33.934: INFO: stderr: "" Apr 15 14:23:33.934: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Apr 15 14:23:33.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7351' Apr 15 14:23:34.027: INFO: stderr: "" Apr 15 14:23:34.027: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:23:34.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7351" for this suite. Apr 15 14:23:56.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:23:56.108: INFO: namespace kubectl-7351 deletion completed in 22.078530013s • [SLOW TEST:26.627 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:23:56.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-fb95d12b-a18f-4210-950a-2cf482b2c57d STEP: Creating a pod to test consume configMaps Apr 15 14:23:56.166: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2" in namespace "projected-8722" to be "success or failure" Apr 15 14:23:56.187: INFO: Pod "pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.559005ms Apr 15 14:23:58.192: INFO: Pod "pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025903248s Apr 15 14:24:00.196: INFO: Pod "pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030011693s STEP: Saw pod success Apr 15 14:24:00.196: INFO: Pod "pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2" satisfied condition "success or failure" Apr 15 14:24:00.199: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2 container projected-configmap-volume-test: STEP: delete the pod Apr 15 14:24:00.305: INFO: Waiting for pod pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2 to disappear Apr 15 14:24:00.321: INFO: Pod pod-projected-configmaps-a3c8f6f7-87b2-4585-8d3d-7ef5bfcd84d2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:24:00.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8722" for this suite. Apr 15 14:24:06.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:24:06.415: INFO: namespace projected-8722 deletion completed in 6.090276037s • [SLOW TEST:10.306 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:24:06.415: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 15 14:24:06.467: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Apr 15 14:24:07.252: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 15 14:24:09.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557447, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557447, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557447, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557447, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 14:24:12.037: INFO: Waited 634.439169ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:24:12.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3673" for this suite. Apr 15 14:24:18.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:24:18.652: INFO: namespace aggregator-3673 deletion completed in 6.183247993s • [SLOW TEST:12.237 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:24:18.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 14:24:18.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a" in namespace "downward-api-3684" to be "success or failure" Apr 15 14:24:18.757: INFO: Pod "downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.68426ms Apr 15 14:24:20.761: INFO: Pod "downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034838252s Apr 15 14:24:22.765: INFO: Pod "downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03836634s STEP: Saw pod success Apr 15 14:24:22.765: INFO: Pod "downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a" satisfied condition "success or failure" Apr 15 14:24:22.768: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a container client-container: STEP: delete the pod Apr 15 14:24:22.805: INFO: Waiting for pod downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a to disappear Apr 15 14:24:22.819: INFO: Pod downwardapi-volume-31fd9002-6553-492f-ba5c-d75aca2cb95a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:24:22.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3684" for this suite. 
Apr 15 14:24:28.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:24:28.921: INFO: namespace downward-api-3684 deletion completed in 6.097963764s • [SLOW TEST:10.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:24:28.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 15 14:24:28.980: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 15 14:24:28.993: INFO: Waiting for terminating namespaces to be deleted... 
Apr 15 14:24:28.995: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 15 14:24:29.001: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 15 14:24:29.001: INFO: Container kube-proxy ready: true, restart count 0 Apr 15 14:24:29.001: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 15 14:24:29.001: INFO: Container kindnet-cni ready: true, restart count 0 Apr 15 14:24:29.001: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 15 14:24:29.007: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 15 14:24:29.007: INFO: Container kube-proxy ready: true, restart count 0 Apr 15 14:24:29.007: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 15 14:24:29.007: INFO: Container kindnet-cni ready: true, restart count 0 Apr 15 14:24:29.007: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 15 14:24:29.007: INFO: Container coredns ready: true, restart count 0 Apr 15 14:24:29.007: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 15 14:24:29.007: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16060452059c157a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:24:30.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-311" for this suite. Apr 15 14:24:36.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:24:36.150: INFO: namespace sched-pred-311 deletion completed in 6.118748434s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.229 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:24:36.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 15 14:24:40.756: INFO: Successfully updated pod "labelsupdatede6404c6-8daf-4468-898d-ae749b507fdd" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:24:42.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2276" for this suite. Apr 15 14:25:04.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:25:04.889: INFO: namespace projected-2276 deletion completed in 22.083937153s • [SLOW TEST:28.738 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:25:04.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-122224b3-48ba-48b2-abc3-2149e3afb343 STEP: Creating a 
pod to test consume configMaps Apr 15 14:25:05.026: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d" in namespace "configmap-9198" to be "success or failure" Apr 15 14:25:05.043: INFO: Pod "pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.488907ms Apr 15 14:25:07.099: INFO: Pod "pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072313465s Apr 15 14:25:09.102: INFO: Pod "pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076180652s STEP: Saw pod success Apr 15 14:25:09.103: INFO: Pod "pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d" satisfied condition "success or failure" Apr 15 14:25:09.105: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d container configmap-volume-test: STEP: delete the pod Apr 15 14:25:09.127: INFO: Waiting for pod pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d to disappear Apr 15 14:25:09.131: INFO: Pod pod-configmaps-ca563fad-57d8-451e-8753-da378b036e6d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:25:09.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9198" for this suite. 
Apr 15 14:25:15.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:25:15.236: INFO: namespace configmap-9198 deletion completed in 6.102804433s • [SLOW TEST:10.347 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:25:15.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 15 14:25:15.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6983' Apr 15 14:25:15.402: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 15 14:25:15.402: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 15 14:25:17.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6983' Apr 15 14:25:17.545: INFO: stderr: "" Apr 15 14:25:17.545: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:25:17.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6983" for this suite. Apr 15 14:25:39.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:25:39.667: INFO: namespace kubectl-6983 deletion completed in 22.118849733s • [SLOW TEST:24.431 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:25:39.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 14:25:39.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0" in namespace "projected-4008" to be "success or failure" Apr 15 14:25:39.746: INFO: Pod "downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.006994ms Apr 15 14:25:41.776: INFO: Pod "downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045702457s Apr 15 14:25:43.781: INFO: Pod "downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050410813s STEP: Saw pod success Apr 15 14:25:43.781: INFO: Pod "downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0" satisfied condition "success or failure" Apr 15 14:25:43.783: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0 container client-container: STEP: delete the pod Apr 15 14:25:43.819: INFO: Waiting for pod downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0 to disappear Apr 15 14:25:43.832: INFO: Pod downwardapi-volume-abec892f-5792-45cb-bba0-c70cf86edbb0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:25:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4008" for this suite. Apr 15 14:25:49.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:25:49.922: INFO: namespace projected-4008 deletion completed in 6.086524563s • [SLOW TEST:10.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:25:49.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 15 14:25:49.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a" in namespace "downward-api-2768" to be "success or failure" Apr 15 14:25:49.982: INFO: Pod "downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.763189ms Apr 15 14:25:52.007: INFO: Pod "downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030438381s Apr 15 14:25:54.027: INFO: Pod "downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051145514s STEP: Saw pod success Apr 15 14:25:54.028: INFO: Pod "downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a" satisfied condition "success or failure" Apr 15 14:25:54.031: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a container client-container: STEP: delete the pod Apr 15 14:25:54.050: INFO: Waiting for pod downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a to disappear Apr 15 14:25:54.054: INFO: Pod downwardapi-volume-e81a33f7-bc3f-4a46-9f00-70d605400c5a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:25:54.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2768" for this suite. 
Apr 15 14:26:00.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:26:00.162: INFO: namespace downward-api-2768 deletion completed in 6.104435597s • [SLOW TEST:10.239 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:26:00.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 15 14:26:00.273: INFO: Waiting up to 5m0s for pod "downward-api-c726b9cb-19c0-4724-acd1-69084821764c" in namespace "downward-api-6455" to be "success or failure" Apr 15 14:26:00.290: INFO: Pod "downward-api-c726b9cb-19c0-4724-acd1-69084821764c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.934428ms Apr 15 14:26:02.294: INFO: Pod "downward-api-c726b9cb-19c0-4724-acd1-69084821764c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020704929s Apr 15 14:26:04.298: INFO: Pod "downward-api-c726b9cb-19c0-4724-acd1-69084821764c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024827992s STEP: Saw pod success Apr 15 14:26:04.298: INFO: Pod "downward-api-c726b9cb-19c0-4724-acd1-69084821764c" satisfied condition "success or failure" Apr 15 14:26:04.321: INFO: Trying to get logs from node iruya-worker pod downward-api-c726b9cb-19c0-4724-acd1-69084821764c container dapi-container: STEP: delete the pod Apr 15 14:26:04.362: INFO: Waiting for pod downward-api-c726b9cb-19c0-4724-acd1-69084821764c to disappear Apr 15 14:26:04.365: INFO: Pod downward-api-c726b9cb-19c0-4724-acd1-69084821764c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:26:04.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6455" for this suite. Apr 15 14:26:10.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:26:10.472: INFO: namespace downward-api-6455 deletion completed in 6.104626371s • [SLOW TEST:10.310 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:26:10.472: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 15 14:26:10.557: INFO: Waiting up to 5m0s for pod "var-expansion-1f5ce383-e680-4a60-b032-4974430ac051" in namespace "var-expansion-9610" to be "success or failure" Apr 15 14:26:10.578: INFO: Pod "var-expansion-1f5ce383-e680-4a60-b032-4974430ac051": Phase="Pending", Reason="", readiness=false. Elapsed: 20.5283ms Apr 15 14:26:12.597: INFO: Pod "var-expansion-1f5ce383-e680-4a60-b032-4974430ac051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039827974s Apr 15 14:26:14.602: INFO: Pod "var-expansion-1f5ce383-e680-4a60-b032-4974430ac051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04464774s STEP: Saw pod success Apr 15 14:26:14.602: INFO: Pod "var-expansion-1f5ce383-e680-4a60-b032-4974430ac051" satisfied condition "success or failure" Apr 15 14:26:14.605: INFO: Trying to get logs from node iruya-worker pod var-expansion-1f5ce383-e680-4a60-b032-4974430ac051 container dapi-container: STEP: delete the pod Apr 15 14:26:14.627: INFO: Waiting for pod var-expansion-1f5ce383-e680-4a60-b032-4974430ac051 to disappear Apr 15 14:26:14.655: INFO: Pod var-expansion-1f5ce383-e680-4a60-b032-4974430ac051 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:26:14.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9610" for this suite. 
Apr 15 14:26:20.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:26:20.741: INFO: namespace var-expansion-9610 deletion completed in 6.081888958s • [SLOW TEST:10.268 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:26:20.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-7bff8e23-af45-4867-a498-e90b81a0523e STEP: Creating secret with name s-test-opt-upd-2f173126-7406-4616-999e-615af6bb4238 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7bff8e23-af45-4867-a498-e90b81a0523e STEP: Updating secret s-test-opt-upd-2f173126-7406-4616-999e-615af6bb4238 STEP: Creating secret with name s-test-opt-create-3b7962ba-eaab-4952-9d91-fcae7f1f4ceb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:27:39.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3169" for this suite. Apr 15 14:28:01.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:28:01.376: INFO: namespace projected-3169 deletion completed in 22.094471765s • [SLOW TEST:100.635 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:28:01.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 15 14:28:01.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 15 14:28:01.570: INFO: stderr: "" Apr 15 14:28:01.570: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", 
BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 15 14:28:01.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5943" for this suite. Apr 15 14:28:07.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 15 14:28:07.659: INFO: namespace kubectl-5943 deletion completed in 6.084581935s • [SLOW TEST:6.282 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 15 14:28:07.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 15 14:28:07.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317" in namespace "projected-797" to be "success or failure"
Apr 15 14:28:07.724: INFO: Pod "downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317": Phase="Pending", Reason="", readiness=false. Elapsed: 3.182759ms
Apr 15 14:28:09.728: INFO: Pod "downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007565612s
Apr 15 14:28:11.731: INFO: Pod "downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010909955s
STEP: Saw pod success
Apr 15 14:28:11.731: INFO: Pod "downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317" satisfied condition "success or failure"
Apr 15 14:28:11.734: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317 container client-container: 
STEP: delete the pod
Apr 15 14:28:11.764: INFO: Waiting for pod downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317 to disappear
Apr 15 14:28:11.773: INFO: Pod downwardapi-volume-af99ddb5-54f9-468a-8bf8-6259a83a0317 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:28:11.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-797" for this suite.
Apr 15 14:28:17.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:28:17.850: INFO: namespace projected-797 deletion completed in 6.072461054s

• [SLOW TEST:10.190 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:28:17.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 15 14:28:17.897: INFO: PodSpec: initContainers in spec.initContainers
Apr 15 14:29:06.929: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c3b712ae-3b7f-4f09-ac5b-808fd2f95856", GenerateName:"", Namespace:"init-container-2607", 
SelfLink:"/api/v1/namespaces/init-container-2607/pods/pod-init-c3b712ae-3b7f-4f09-ac5b-808fd2f95856", UID:"60b41ce1-8da3-485c-a2d7-71f44d98d663", ResourceVersion:"5578864", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722557697, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"897661549"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n2gwc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00297a1c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n2gwc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n2gwc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n2gwc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002766278), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023a4180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002766300)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002766320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002766328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00276632c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557698, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557698, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557698, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722557697, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.172", StartTime:(*v1.Time)(0xc00059bc40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001756af0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001756b60)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://98d981571676b6e26c64f6c5ba63cd2cf9127c71e93e21fdf6de7adf494b57e0"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002956020), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002956000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:29:06.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2607" for this suite.
Apr 15 14:29:29.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:29:29.128: INFO: namespace init-container-2607 deletion completed in 22.098023924s

• [SLOW TEST:71.278 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:29:29.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9207
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 15 14:29:29.177: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 15 14:29:57.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.126:8080/dial?request=hostName&protocol=http&host=10.244.2.173&port=8080&tries=1'] Namespace:pod-network-test-9207 PodName:host-test-container-pod ContainerName:hostexec Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 14:29:57.282: INFO: >>> kubeConfig: /root/.kube/config
I0415 14:29:57.314330 6 log.go:172] (0xc000ecba20) (0xc002994320) Create stream
I0415 14:29:57.314370 6 log.go:172] (0xc000ecba20) (0xc002994320) Stream added, broadcasting: 1
I0415 14:29:57.316180 6 log.go:172] (0xc000ecba20) Reply frame received for 1
I0415 14:29:57.316222 6 log.go:172] (0xc000ecba20) (0xc003012000) Create stream
I0415 14:29:57.316235 6 log.go:172] (0xc000ecba20) (0xc003012000) Stream added, broadcasting: 3
I0415 14:29:57.317058 6 log.go:172] (0xc000ecba20) Reply frame received for 3
I0415 14:29:57.317106 6 log.go:172] (0xc000ecba20) (0xc002ee1f40) Create stream
I0415 14:29:57.317243 6 log.go:172] (0xc000ecba20) (0xc002ee1f40) Stream added, broadcasting: 5
I0415 14:29:57.318085 6 log.go:172] (0xc000ecba20) Reply frame received for 5
I0415 14:29:57.446976 6 log.go:172] (0xc000ecba20) Data frame received for 3
I0415 14:29:57.447021 6 log.go:172] (0xc003012000) (3) Data frame handling
I0415 14:29:57.447049 6 log.go:172] (0xc003012000) (3) Data frame sent
I0415 14:29:57.447477 6 log.go:172] (0xc000ecba20) Data frame received for 5
I0415 14:29:57.447565 6 log.go:172] (0xc002ee1f40) (5) Data frame handling
I0415 14:29:57.447610 6 log.go:172] (0xc000ecba20) Data frame received for 3
I0415 14:29:57.447632 6 log.go:172] (0xc003012000) (3) Data frame handling
I0415 14:29:57.449459 6 log.go:172] (0xc000ecba20) Data frame received for 1
I0415 14:29:57.449475 6 log.go:172] (0xc002994320) (1) Data frame handling
I0415 14:29:57.449484 6 log.go:172] (0xc002994320) (1) Data frame sent
I0415 14:29:57.449493 6 log.go:172] (0xc000ecba20) (0xc002994320) Stream removed, broadcasting: 1
I0415 14:29:57.449575 6 log.go:172] (0xc000ecba20) (0xc002994320) Stream removed, broadcasting: 1
I0415 14:29:57.449586 6 log.go:172] (0xc000ecba20) (0xc003012000) Stream removed, broadcasting: 3
I0415 14:29:57.449594 6 log.go:172] (0xc000ecba20) (0xc002ee1f40) Stream removed, broadcasting: 5
I0415 14:29:57.449645 6 log.go:172] (0xc000ecba20) Go away received
Apr 15 14:29:57.449: INFO: Waiting for endpoints: map[]
Apr 15 14:29:57.452: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.126:8080/dial?request=hostName&protocol=http&host=10.244.1.125&port=8080&tries=1'] Namespace:pod-network-test-9207 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 15 14:29:57.452: INFO: >>> kubeConfig: /root/.kube/config
I0415 14:29:57.477491 6 log.go:172] (0xc001b9a160) (0xc0029945a0) Create stream
I0415 14:29:57.477516 6 log.go:172] (0xc001b9a160) (0xc0029945a0) Stream added, broadcasting: 1
I0415 14:29:57.479702 6 log.go:172] (0xc001b9a160) Reply frame received for 1
I0415 14:29:57.479742 6 log.go:172] (0xc001b9a160) (0xc0015c6000) Create stream
I0415 14:29:57.479757 6 log.go:172] (0xc001b9a160) (0xc0015c6000) Stream added, broadcasting: 3
I0415 14:29:57.480538 6 log.go:172] (0xc001b9a160) Reply frame received for 3
I0415 14:29:57.480591 6 log.go:172] (0xc001b9a160) (0xc003012140) Create stream
I0415 14:29:57.480614 6 log.go:172] (0xc001b9a160) (0xc003012140) Stream added, broadcasting: 5
I0415 14:29:57.481555 6 log.go:172] (0xc001b9a160) Reply frame received for 5
I0415 14:29:57.548379 6 log.go:172] (0xc001b9a160) Data frame received for 3
I0415 14:29:57.548429 6 log.go:172] (0xc0015c6000) (3) Data frame handling
I0415 14:29:57.548477 6 log.go:172] (0xc0015c6000) (3) Data frame sent
I0415 14:29:57.549016 6 log.go:172] (0xc001b9a160) Data frame received for 3
I0415 14:29:57.549044 6 log.go:172] (0xc0015c6000) (3) Data frame handling
I0415 14:29:57.549390 6 log.go:172] (0xc001b9a160) Data frame received for 5
I0415 14:29:57.549421 6 log.go:172] (0xc003012140) (5) Data frame handling
I0415 14:29:57.551213 6 log.go:172] (0xc001b9a160) Data frame received for 1
I0415 14:29:57.551235 6 log.go:172] (0xc0029945a0) (1) Data frame handling
I0415 14:29:57.551252 6 log.go:172] (0xc0029945a0) (1) Data frame sent
I0415 14:29:57.551290 6 log.go:172] (0xc001b9a160) (0xc0029945a0) Stream removed, broadcasting: 1
I0415 14:29:57.551358 6 log.go:172] (0xc001b9a160) Go away received
I0415 14:29:57.551436 6 log.go:172] (0xc001b9a160) (0xc0029945a0) Stream removed, broadcasting: 1
I0415 14:29:57.551470 6 log.go:172] (0xc001b9a160) (0xc0015c6000) Stream removed, broadcasting: 3
I0415 14:29:57.551494 6 log.go:172] (0xc001b9a160) (0xc003012140) Stream removed, broadcasting: 5
Apr 15 14:29:57.551: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:29:57.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9207" for this suite.
Apr 15 14:30:21.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:30:21.656: INFO: namespace pod-network-test-9207 deletion completed in 24.100320707s

• [SLOW TEST:52.527 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:30:21.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 15 14:30:21.683: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:30:28.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3958" for this suite.
Apr 15 14:30:50.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:30:50.898: INFO: namespace init-container-3958 deletion completed in 22.102950929s

• [SLOW TEST:29.242 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:30:50.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 15 14:30:51.027: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d1f83545-7f5b-41d0-97d5-a948790979f8", Controller:(*bool)(0xc002201ef2), BlockOwnerDeletion:(*bool)(0xc002201ef3)}}
Apr 15 14:30:51.098: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a438f022-fe2c-4cba-b81a-aab35890d9c5", Controller:(*bool)(0xc000d018d2), BlockOwnerDeletion:(*bool)(0xc000d018d3)}}
Apr 15 14:30:51.106: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"74340bfb-e552-4566-a194-ebf3ca571623", Controller:(*bool)(0xc0023c76fa), BlockOwnerDeletion:(*bool)(0xc0023c76fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:30:56.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1048" for this suite.
Apr 15 14:31:02.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:31:02.255: INFO: namespace gc-1048 deletion completed in 6.09454984s

• [SLOW TEST:11.357 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:31:02.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 15 14:31:02.306: INFO: Waiting up to 5m0s for pod "pod-6175a1dd-7630-445c-bdb4-8ad888980f95" in namespace "emptydir-8251" to be "success or failure"
Apr 15 14:31:02.324: INFO: Pod "pod-6175a1dd-7630-445c-bdb4-8ad888980f95": Phase="Pending", Reason="", readiness=false. Elapsed: 17.57863ms
Apr 15 14:31:04.328: INFO: Pod "pod-6175a1dd-7630-445c-bdb4-8ad888980f95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021577937s
Apr 15 14:31:06.332: INFO: Pod "pod-6175a1dd-7630-445c-bdb4-8ad888980f95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026232334s
STEP: Saw pod success
Apr 15 14:31:06.333: INFO: Pod "pod-6175a1dd-7630-445c-bdb4-8ad888980f95" satisfied condition "success or failure"
Apr 15 14:31:06.336: INFO: Trying to get logs from node iruya-worker2 pod pod-6175a1dd-7630-445c-bdb4-8ad888980f95 container test-container: 
STEP: delete the pod
Apr 15 14:31:06.365: INFO: Waiting for pod pod-6175a1dd-7630-445c-bdb4-8ad888980f95 to disappear
Apr 15 14:31:06.376: INFO: Pod pod-6175a1dd-7630-445c-bdb4-8ad888980f95 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:31:06.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8251" for this suite.
Apr 15 14:31:12.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:31:12.490: INFO: namespace emptydir-8251 deletion completed in 6.110146824s

• [SLOW TEST:10.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 15 14:31:12.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-871fa6f9-f045-4045-85f6-6f279097b968 in namespace container-probe-6929
Apr 15 14:31:16.573: INFO: Started pod test-webserver-871fa6f9-f045-4045-85f6-6f279097b968 in namespace container-probe-6929
STEP: checking the pod's current state and verifying that restartCount is present
Apr 15 14:31:16.576: INFO: Initial restart count of pod test-webserver-871fa6f9-f045-4045-85f6-6f279097b968 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 15 14:35:17.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6929" for this suite.
Apr 15 14:35:23.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 15 14:35:23.260: INFO: namespace container-probe-6929 deletion completed in 6.095081574s

• [SLOW TEST:250.770 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
Apr 15 14:35:23.260: INFO: Running AfterSuite actions on all nodes
Apr 15 14:35:23.260: INFO: Running AfterSuite actions on node 1
Apr 15 14:35:23.260: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 5978.848 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS