I0220 12:55:58.713194 8 e2e.go:243] Starting e2e run "e3751b31-7a0a-4595-8952-e717bf6923db" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582203357 - Will randomize all specs
Will run 215 of 4412 specs

Feb 20 12:55:58.916: INFO: >>> kubeConfig: /root/.kube/config
Feb 20 12:55:58.919: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 20 12:55:58.949: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 20 12:55:58.975: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 20 12:55:58.975: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 20 12:55:58.975: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 20 12:55:58.985: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 20 12:55:58.985: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 20 12:55:58.985: INFO: e2e test version: v1.15.7
Feb 20 12:55:58.986: INFO: kube-apiserver version: v1.15.1
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:55:58.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Feb 20 12:55:59.113: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 20 12:55:59.131: INFO: Waiting up to 5m0s for pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e" in namespace "emptydir-25" to be "success or failure"
Feb 20 12:55:59.143: INFO: Pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.850629ms
Feb 20 12:56:01.153: INFO: Pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021389349s
Feb 20 12:56:03.160: INFO: Pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028674612s
Feb 20 12:56:05.181: INFO: Pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049638819s
Feb 20 12:56:07.190: INFO: Pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058396964s
STEP: Saw pod success
Feb 20 12:56:07.190: INFO: Pod "pod-2548d44c-d4c0-476e-9420-aebf1b30b82e" satisfied condition "success or failure"
Feb 20 12:56:07.197: INFO: Trying to get logs from node iruya-node pod pod-2548d44c-d4c0-476e-9420-aebf1b30b82e container test-container:
STEP: delete the pod
Feb 20 12:56:07.247: INFO: Waiting for pod pod-2548d44c-d4c0-476e-9420-aebf1b30b82e to disappear
Feb 20 12:56:07.258: INFO: Pod pod-2548d44c-d4c0-476e-9420-aebf1b30b82e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:56:07.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-25" for this suite.
Feb 20 12:56:13.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:56:13.388: INFO: namespace emptydir-25 deletion completed in 6.124345011s

• [SLOW TEST:14.401 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:56:13.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-2d1063f4-4e91-44a4-92f2-0b704390d42a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2d1063f4-4e91-44a4-92f2-0b704390d42a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:56:23.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8507" for this suite.
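The ConfigMap test above verifies the kubelet's in-place refresh of configMap volumes: the pod keeps running while the mounted file changes underneath it. A rough manual equivalent, outside the e2e framework (the resource and key names here are invented for illustration), would be:

    # Create a configMap and a pod that mounts it as a volume.
    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-watcher
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
    EOF
    # Change the data; the kubelet rewrites the mounted file (typically
    # within a minute) without restarting the pod.
    kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'

The "waiting to observe update in volume" step is exactly this: polling the file inside the running container until the new value appears.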
Feb 20 12:56:45.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:56:45.976: INFO: namespace configmap-8507 deletion completed in 22.204995724s

• [SLOW TEST:32.588 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:56:45.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 20 12:56:46.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3758'
Feb 20 12:56:48.317: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 20 12:56:48.317: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 20 12:56:48.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3758'
Feb 20 12:56:48.585: INFO: stderr: ""
Feb 20 12:56:48.585: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:56:48.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3758" for this suite.
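Note the deprecation warning captured in stderr above: on this kubectl (v1.15), --generator=job/v1 still works, but the suggested replacement is kubectl create. Assuming the same image, the modern equivalent of the command the test runs would be:

    # Deprecated form exercised by the test:
    kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
        --image=docker.io/library/nginx:1.14-alpine

    # Replacement (kubectl create job is available from v1.14 on):
    kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine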
Feb 20 12:57:10.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:57:10.767: INFO: namespace kubectl-3758 deletion completed in 22.176493542s

• [SLOW TEST:24.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:57:10.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 20 12:57:19.321: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:57:19.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5429" for this suite.
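This test (and its failing-container twin later in the run) exercises terminationMessagePolicy: FallbackToLogsOnError, under which the container's log tail is used as the termination message only when the container exits non-zero. A minimal sketch of the succeeding case (the pod name and command are illustrative, not the generated e2e spec):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-msg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        # Exits 0, so FallbackToLogsOnError leaves the message empty,
        # which is what "Expected: &{} to match" asserts above.
        command: ["sh", "-c", "echo DONE; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod termination-msg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

With exit 1 instead, the same jsonpath would print DONE, matching the failing-container variant later in this log.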
Feb 20 12:57:25.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:57:25.533: INFO: namespace container-runtime-5429 deletion completed in 6.163910814s

• [SLOW TEST:14.766 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:57:25.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 20 12:57:25.635: INFO: namespace kubectl-7424
Feb 20 12:57:25.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7424'
Feb 20 12:57:26.107: INFO: stderr: ""
Feb 20 12:57:26.107: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 20 12:57:27.114: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:27.114: INFO: Found 0 / 1
Feb 20 12:57:28.114: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:28.114: INFO: Found 0 / 1
Feb 20 12:57:29.133: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:29.133: INFO: Found 0 / 1
Feb 20 12:57:30.119: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:30.119: INFO: Found 0 / 1
Feb 20 12:57:31.113: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:31.113: INFO: Found 0 / 1
Feb 20 12:57:32.121: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:32.121: INFO: Found 0 / 1
Feb 20 12:57:33.117: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:33.117: INFO: Found 0 / 1
Feb 20 12:57:34.116: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:34.116: INFO: Found 1 / 1
Feb 20 12:57:34.116: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 20 12:57:34.118: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 12:57:34.118: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 20 12:57:34.118: INFO: wait on redis-master startup in kubectl-7424
Feb 20 12:57:34.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d6p5h redis-master --namespace=kubectl-7424'
Feb 20 12:57:34.233: INFO: stderr: ""
Feb 20 12:57:34.233: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Feb 12:57:32.164 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Feb 12:57:32.165 # Server started, Redis version 3.2.12\n1:M 20 Feb 12:57:32.165 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Feb 12:57:32.165 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 20 12:57:34.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7424'
Feb 20 12:57:34.446: INFO: stderr: ""
Feb 20 12:57:34.446: INFO: stdout: "service/rm2 exposed\n"
Feb 20 12:57:34.471: INFO: Service rm2 in namespace kubectl-7424 found.
STEP: exposing service
Feb 20 12:57:36.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7424'
Feb 20 12:57:36.789: INFO: stderr: ""
Feb 20 12:57:36.789: INFO: stdout: "service/rm3 exposed\n"
Feb 20 12:57:36.799: INFO: Service rm3 in namespace kubectl-7424 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:57:38.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7424" for this suite.
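The two expose calls above build a small chain: rm2 selects the rc's pods directly, and rm3 is created from rm2's spec, so both services end up targeting port 6379 on the redis pods, just on different service ports. Reproduced by hand (namespace flags omitted):

    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
    # exposing a service copies its selector, so rm3 also hits the redis pods
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
    kubectl get svc rm2 rm3 -o wide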
Feb 20 12:58:02.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:58:02.978: INFO: namespace kubectl-7424 deletion completed in 24.148671982s

• [SLOW TEST:37.444 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:58:02.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a
Feb 20 12:58:03.109: INFO: Pod name my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a: Found 0 pods out of 1
Feb 20 12:58:08.116: INFO: Pod name my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a: Found 1 pods out of 1
Feb 20 12:58:08.116: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a" are running
Feb 20 12:58:10.125: INFO: Pod "my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a-57497" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:58:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:58:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:58:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:58:03 +0000 UTC Reason: Message:}])
Feb 20 12:58:10.125: INFO: Trying to dial the pod
Feb 20 12:58:15.152: INFO: Controller my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a: Got expected result from replica 1 [my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a-57497]: "my-hostname-basic-0a45f3f4-4721-4a40-be26-2e3508f8ae3a-57497", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:58:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9152" for this suite.
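The controller under test runs one replica of an image that answers HTTP requests with its own hostname; the "Got expected result from replica 1" line is the test dialing the pod and comparing the response to the pod name. A minimal RC in the same shape (the e2e suite builds this spec in Go; the image and port below are illustrative stand-ins, not necessarily what this run used):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-hostname-basic
    spec:
      replicas: 1
      selector:
        name: my-hostname-basic
      template:
        metadata:
          labels:
            name: my-hostname-basic
        spec:
          containers:
          - name: my-hostname-basic
            # any image that serves its hostname over HTTP works here
            image: k8s.gcr.io/e2e-test-images/agnhost:2.21
            args: ["serve-hostname"]
            ports:
            - containerPort: 9376
    EOF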
Feb 20 12:58:21.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:58:21.299: INFO: namespace replication-controller-9152 deletion completed in 6.142086029s

• [SLOW TEST:18.321 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:58:21.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e1d7d88b-4839-4a8b-a49a-76966406e966
STEP: Creating a pod to test consume secrets
Feb 20 12:58:21.369: INFO: Waiting up to 5m0s for pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8" in namespace "secrets-3846" to be "success or failure"
Feb 20 12:58:21.389: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.684379ms
Feb 20 12:58:23.398: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028932592s
Feb 20 12:58:25.409: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039561032s
Feb 20 12:58:27.415: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04585613s
Feb 20 12:58:29.426: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056270004s
Feb 20 12:58:31.436: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066453821s
STEP: Saw pod success
Feb 20 12:58:31.436: INFO: Pod "pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8" satisfied condition "success or failure"
Feb 20 12:58:31.439: INFO: Trying to get logs from node iruya-node pod pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8 container secret-volume-test:
STEP: delete the pod
Feb 20 12:58:31.516: INFO: Waiting for pod pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8 to disappear
Feb 20 12:58:31.525: INFO: Pod pod-secrets-552abdda-91e6-4a02-ae79-d9768c38dca8 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:58:31.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3846" for this suite.
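"Mappings and Item Mode" means individual secret keys are projected to chosen file paths with an explicit per-file mode, rather than every key being mounted under its own name with the default mode. A hand-written equivalent (the names, key, and mode are illustrative):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping"
            mode: 0400              # the "Item Mode"
    EOF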
Feb 20 12:58:37.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:58:37.705: INFO: namespace secrets-3846 deletion completed in 6.175968574s

• [SLOW TEST:16.406 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:58:37.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 20 12:58:37.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327" in namespace "projected-3010" to be "success or failure"
Feb 20 12:58:37.798: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327": Phase="Pending", Reason="", readiness=false. Elapsed: 22.681154ms
Feb 20 12:58:39.809: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033955639s
Feb 20 12:58:41.816: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0408487s
Feb 20 12:58:43.828: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052923393s
Feb 20 12:58:45.840: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064912531s
Feb 20 12:58:47.850: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074607274s
STEP: Saw pod success
Feb 20 12:58:47.850: INFO: Pod "downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327" satisfied condition "success or failure"
Feb 20 12:58:47.856: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327 container client-container:
STEP: delete the pod
Feb 20 12:58:47.914: INFO: Waiting for pod downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327 to disappear
Feb 20 12:58:47.929: INFO: Pod downwardapi-volume-7d5cf01b-31d4-43d4-94d9-55ce55071327 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:58:47.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3010" for this suite.
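The "podname only" variant projects a single downward-API field, metadata.name, into the volume. A sketch of the same shape (the pod name is illustrative; the projected form matches the "Projected downwardAPI" suite):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-podname-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF

The container simply cats the file, so its log should equal the pod's own name; that is what the framework's log check above verifies.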
Feb 20 12:58:53.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:58:54.075: INFO: namespace projected-3010 deletion completed in 6.138927933s

• [SLOW TEST:16.370 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:58:54.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-418bea81-dd34-459f-be7b-e0411f1b6a5d
STEP: Creating a pod to test consume configMaps
Feb 20 12:58:54.196: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700" in namespace "projected-6040" to be "success or failure"
Feb 20 12:58:54.208: INFO: Pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700": Phase="Pending", Reason="", readiness=false. Elapsed: 11.663013ms
Feb 20 12:58:56.216: INFO: Pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020065819s
Feb 20 12:58:58.221: INFO: Pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02480591s
Feb 20 12:59:00.237: INFO: Pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041546704s
Feb 20 12:59:02.245: INFO: Pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04913056s
STEP: Saw pod success
Feb 20 12:59:02.245: INFO: Pod "pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700" satisfied condition "success or failure"
Feb 20 12:59:02.248: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700 container projected-configmap-volume-test:
STEP: delete the pod
Feb 20 12:59:02.411: INFO: Waiting for pod pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700 to disappear
Feb 20 12:59:02.429: INFO: Pod pod-projected-configmaps-239f8d9a-dbc6-4800-bfda-de977cb36700 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:59:02.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6040" for this suite.
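Here the interesting part is "as non-root": the pod runs with a non-zero UID and must still be able to read the projected configMap file. A sketch of that arrangement (the UID and names are illustrative):

    kubectl create configmap demo-cm-2 --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000    # non-root, as the test name implies
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: cm
          mountPath: /etc/projected
      volumes:
      - name: cm
        projected:
          sources:
          - configMap:
              name: demo-cm-2
    EOF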
Feb 20 12:59:08.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:59:08.579: INFO: namespace projected-6040 deletion completed in 6.143787599s

• [SLOW TEST:14.504 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:59:08.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:59:08.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7446" for this suite.
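"Set QOS Class" checks that the API server fills in status.qosClass for a newly submitted pod. The class is derived from the containers' requests and limits; for example, requests equal to limits on every container yields Guaranteed (pod name below is illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
        resources:
          requests: {cpu: 100m, memory: 64Mi}
          limits:   {cpu: 100m, memory: 64Mi}
    EOF
    # requests == limits for every container => Guaranteed
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'

Requests below limits would yield Burstable, and no requests or limits at all BestEffort.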
Feb 20 12:59:30.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:59:30.905: INFO: namespace pods-7446 deletion completed in 22.212085976s

• [SLOW TEST:22.325 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:59:30.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-5cb44b73-54cc-497d-ac3e-1b7eacd87c5f
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:59:30.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-990" for this suite.
Feb 20 12:59:37.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:59:37.121: INFO: namespace secrets-990 deletion completed in 6.120360353s

• [SLOW TEST:6.216 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:59:37.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 20 12:59:45.456: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 12:59:45.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6481" for this suite.
Feb 20 12:59:51.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:59:51.662: INFO: namespace container-runtime-6481 deletion completed in 6.179748969s

• [SLOW TEST:14.540 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 12:59:51.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5283.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5283.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5283.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5283.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5283.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5283.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 13:00:05.936: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881: the server could not find the requested resource (get pods dns-test-fe596747-23c5-4ae3-b249-1f56a5654881)
Feb 20 13:00:05.941: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881: the server could not find the requested resource (get pods dns-test-fe596747-23c5-4ae3-b249-1f56a5654881)
Feb 20 13:00:05.946: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5283.svc.cluster.local from pod dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881: the server could not find the requested resource (get pods dns-test-fe596747-23c5-4ae3-b249-1f56a5654881)
Feb 20 13:00:05.949: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881: the server could not find the requested resource (get pods dns-test-fe596747-23c5-4ae3-b249-1f56a5654881)
Feb 20 13:00:05.953: INFO: Unable to read jessie_udp@PodARecord from pod dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881: the server could not find the requested resource (get pods dns-test-fe596747-23c5-4ae3-b249-1f56a5654881)
Feb 20 13:00:05.961: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881: the server could not find the requested resource (get pods dns-test-fe596747-23c5-4ae3-b249-1f56a5654881)
Feb 20 13:00:05.961: INFO: Lookups using dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-5283.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 20 13:00:11.073: INFO: DNS probes using dns-5283/dns-test-fe596747-23c5-4ae3-b249-1f56a5654881 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 13:00:11.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5283" for this suite.
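The long probe commands above reduce to getent/dig checks run inside the test pod; each successful lookup drops an OK file under /results that the test then fetches. The /etc/hosts half can be reproduced against any pod whose image ships getent (the pod name below is illustrative):

    # The pod's own hostname must resolve via the kubelet-managed /etc/hosts
    kubectl exec dns-test-pod -- sh -c 'getent hosts "$(hostname)"'
    # Pod A record, as probed above (requires dig in the image; replace
    # "default" with the pod's actual namespace):
    kubectl exec dns-test-pod -- sh -c \
      'dig +notcp +noall +answer +search "$(hostname -i | tr . -)".default.pod.cluster.local A'

The transient "Unable to read ... PodARecord" lines are just the poller running before the probe files exist; the final "DNS probes ... succeeded" line is what decides the test.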
Feb 20 13:00:17.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:00:17.339: INFO: namespace dns-5283 deletion completed in 6.157887703s

• [SLOW TEST:25.677 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 13:00:17.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 20 13:00:33.588: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:33.638: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:35.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:35.647: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:37.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:37.647: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:39.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:39.649: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:41.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:41.647: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:43.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:43.648: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:45.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:45.649: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:47.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:47.646: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:49.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:49.648: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:51.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:51.648: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:53.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:53.647: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 13:00:55.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 13:00:55.651: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 13:00:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-736" for this suite.
Feb 20 13:01:17.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:01:17.849: INFO: namespace container-lifecycle-hook-736 deletion completed in 22.160166541s

• [SLOW TEST:60.509 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 13:01:17.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2223
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 20 13:01:17.992: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 20 13:01:54.157: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2223 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 13:01:54.157: INFO: >>> kubeConfig: /root/.kube/config
I0220 13:01:54.219940 8 log.go:172] (0xc00142e370) (0xc001a32460) Create stream
I0220 13:01:54.219983 8 log.go:172] (0xc00142e370) (0xc001a32460) Stream added, broadcasting: 1
I0220 13:01:54.225297 8 log.go:172] (0xc00142e370) Reply frame received for 1
I0220 13:01:54.225357 8 log.go:172] (0xc00142e370) (0xc001a00780) Create stream
I0220 13:01:54.225367 8 log.go:172] (0xc00142e370) (0xc001a00780) Stream added, broadcasting: 3
I0220 13:01:54.229388 8 log.go:172] (0xc00142e370) Reply frame received for 3
I0220 13:01:54.229414 8 log.go:172] (0xc00142e370) (0xc001a32500) Create stream
I0220 13:01:54.229419 8 log.go:172] (0xc00142e370) (0xc001a32500) Stream added, broadcasting: 5
I0220 13:01:54.233226 8 log.go:172] (0xc00142e370) Reply frame received for 5
I0220 13:01:55.426493 8 log.go:172] (0xc00142e370) Data frame received for 3
I0220 13:01:55.426532 8 log.go:172] (0xc001a00780) (3) Data frame handling
I0220 13:01:55.426573 8 log.go:172] (0xc001a00780) (3) Data frame sent
I0220 13:01:55.601025 8 log.go:172] (0xc00142e370) Data frame received for 1
I0220 13:01:55.601062 8 log.go:172] (0xc001a32460) (1) Data frame handling
I0220 13:01:55.601084 8 log.go:172] (0xc001a32460) (1) Data frame sent
I0220 13:01:55.601375 8 log.go:172] (0xc00142e370) (0xc001a32460) Stream removed, broadcasting: 1
I0220 13:01:55.605106 8 log.go:172] (0xc00142e370) (0xc001a00780) Stream removed, broadcasting: 3
I0220 13:01:55.605172 8 log.go:172] (0xc00142e370) (0xc001a32500) Stream removed, broadcasting: 5
I0220 13:01:55.605210 8 log.go:172] (0xc00142e370) (0xc001a32460) Stream removed, broadcasting: 1
I0220 13:01:55.605223 8 log.go:172] (0xc00142e370) (0xc001a00780) Stream removed, broadcasting: 3
I0220 13:01:55.605280 8 log.go:172] (0xc00142e370) (0xc001a32500) Stream removed, broadcasting: 5
I0220 13:01:55.605518 8 log.go:172] (0xc00142e370) Go away received
Feb 20 13:01:55.605: INFO: Found all expected endpoints: [netserver-0]
Feb 20 13:01:55.615: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2223 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 13:01:55.615: INFO: >>> kubeConfig: /root/.kube/config
I0220 13:01:55.702846 8 log.go:172] (0xc0019504d0) (0xc001e4f9a0) Create stream
I0220 13:01:55.702894 8 log.go:172] (0xc0019504d0) (0xc001e4f9a0) Stream added, broadcasting: 1
I0220 13:01:55.718502 8 log.go:172] (0xc0019504d0) Reply frame received for 1
I0220 13:01:55.718645 8 log.go:172] (0xc0019504d0) (0xc001378000) Create stream
I0220 13:01:55.718671 8 log.go:172] (0xc0019504d0) (0xc001378000) Stream added, broadcasting: 3
I0220 13:01:55.721255 8 log.go:172] (0xc0019504d0) Reply frame received for 3
I0220 13:01:55.721286 8 log.go:172] (0xc0019504d0) (0xc001a32640) Create stream
I0220 13:01:55.721297 8 log.go:172] (0xc0019504d0) (0xc001a32640) Stream added, broadcasting: 5
I0220 13:01:55.723782 8 log.go:172] (0xc0019504d0) Reply frame received for 5
I0220 13:01:56.879253 8 log.go:172] (0xc0019504d0) Data frame received for 3
I0220 13:01:56.879310 8 log.go:172] (0xc001378000) (3) Data frame handling
I0220 13:01:56.879327 8 log.go:172] (0xc001378000) (3) Data frame sent
I0220 13:01:56.998669 8 log.go:172] (0xc0019504d0) (0xc001378000) Stream removed, broadcasting: 3
I0220 13:01:56.998755 8 log.go:172] (0xc0019504d0) Data frame received for 1
I0220 13:01:56.998769 8 log.go:172] (0xc001e4f9a0) (1) Data frame handling
I0220 13:01:56.998785 8 log.go:172] (0xc001e4f9a0) (1) Data frame sent
I0220 13:01:56.998795 8 log.go:172] (0xc0019504d0) (0xc001e4f9a0) Stream removed, broadcasting: 1
I0220 13:01:56.999018 8 log.go:172] (0xc0019504d0) (0xc001a32640) Stream removed, broadcasting: 5
I0220 13:01:56.999048 8 log.go:172] (0xc0019504d0) (0xc001e4f9a0) Stream removed, broadcasting: 1
I0220 13:01:56.999064 8 log.go:172] (0xc0019504d0) (0xc001378000) Stream removed, broadcasting: 3
I0220 13:01:56.999077 8 log.go:172] (0xc0019504d0) (0xc001a32640) Stream removed, broadcasting: 5
Feb 20 13:01:56.999: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 13:01:56.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0220 13:01:56.999579 8 log.go:172] (0xc0019504d0) Go away received
STEP: Destroying namespace "pod-network-test-2223" for this suite.
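All of the stream-frame noise above is the exec transport; the actual check is one shell pipeline run inside the host-network pod against each netserver pod's IP. By hand it is simply:

    # Send "hostName" over UDP to a netserver pod (10.44.0.1:8081 above)
    # and require a non-empty reply; netserver answers with its hostname.
    kubectl exec host-test-container-pod -c hostexec -- \
      /bin/sh -c 'echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v "^\s*$"'

"Found all expected endpoints: [netserver-0]" means the reply named the expected pod.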
Feb 20 13:02:21.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:02:21.163: INFO: namespace pod-network-test-2223 deletion completed in 24.150601948s

• [SLOW TEST:63.313 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 13:02:21.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-9jfh
STEP: Creating a pod to test atomic-volume-subpath
Feb 20 13:02:21.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9jfh" in namespace "subpath-7343" to be "success or failure"
Feb 20 13:02:21.357: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Pending", Reason="", readiness=false. Elapsed: 26.661565ms
Feb 20 13:02:23.370: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039917618s
Feb 20 13:02:25.376: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045764768s
Feb 20 13:02:27.392: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062290385s
Feb 20 13:02:29.411: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 8.080987514s
Feb 20 13:02:31.423: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 10.09262446s
Feb 20 13:02:33.431: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 12.101151138s
Feb 20 13:02:35.441: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 14.111192458s
Feb 20 13:02:37.795: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 16.464632842s
Feb 20 13:02:39.802: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 18.47238999s
Feb 20 13:02:41.811: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 20.480852989s
Feb 20 13:02:43.822: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 22.492292509s
Feb 20 13:02:45.833: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 24.502683s
Feb 20 13:02:47.841: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 26.510938106s
Feb 20 13:02:49.878: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Running", Reason="", readiness=true. Elapsed: 28.54783741s
Feb 20 13:02:51.887: INFO: Pod "pod-subpath-test-downwardapi-9jfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.557469187s
STEP: Saw pod success
Feb 20 13:02:51.888: INFO: Pod "pod-subpath-test-downwardapi-9jfh" satisfied condition "success or failure"
Feb 20 13:02:51.891: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-9jfh container test-container-subpath-downwardapi-9jfh:
STEP: delete the pod
Feb 20 13:02:51.974: INFO: Waiting for pod pod-subpath-test-downwardapi-9jfh to disappear
Feb 20 13:02:52.039: INFO: Pod pod-subpath-test-downwardapi-9jfh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-9jfh
Feb 20 13:02:52.039: INFO: Deleting pod "pod-subpath-test-downwardapi-9jfh" in namespace "subpath-7343"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 13:02:52.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7343" for this suite.
Feb 20 13:02:58.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:02:58.221: INFO: namespace subpath-7343 deletion completed in 6.143741991s

• [SLOW TEST:37.058 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 13:02:58.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-4aca2442-104c-4033-9869-31932669915e in namespace container-probe-8120
Feb 20 13:03:06.477: INFO: Started pod busybox-4aca2442-104c-4033-9869-31932669915e in namespace container-probe-8120
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 13:03:06.481: INFO: Initial restart count of pod busybox-4aca2442-104c-4033-9869-31932669915e is 0
Feb 20 13:04:01.170: INFO: Restart count of pod container-probe-8120/busybox-4aca2442-104c-4033-9869-31932669915e is now 1 (54.688612485s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:04:01.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8120" for this suite. Feb 20 13:04:07.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:04:07.328: INFO: namespace container-probe-8120 deletion completed in 6.132590745s • [SLOW TEST:69.106 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:04:07.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:04:17.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2535" for this suite. 
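The hostAliases case above passes because the kubelet appends entries from pod.spec.hostAliases to the container's /etc/hosts. A minimal sketch, with hypothetical names and addresses:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # expect a "127.0.0.1 foo.local bar.local" entry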
Feb 20 13:05:09.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:05:09.740: INFO: namespace kubelet-test-2535 deletion completed in 52.217762412s • [SLOW TEST:62.412 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:05:09.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-5c6a1730-eda3-4f58-a3b1-ab1744d38afb STEP: Creating a pod to test consume secrets Feb 20 13:05:09.952: INFO: Waiting up to 5m0s for pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32" in namespace "secrets-7446" to be "success or failure" Feb 20 13:05:09.971: INFO: Pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32": Phase="Pending", Reason="", readiness=false. Elapsed: 19.612015ms Feb 20 13:05:11.980: INFO: Pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028698052s Feb 20 13:05:13.991: INFO: Pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038771909s Feb 20 13:05:16.002: INFO: Pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049934088s Feb 20 13:05:18.011: INFO: Pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059315693s STEP: Saw pod success Feb 20 13:05:18.011: INFO: Pod "pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32" satisfied condition "success or failure" Feb 20 13:05:18.017: INFO: Trying to get logs from node iruya-node pod pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32 container secret-volume-test: STEP: delete the pod Feb 20 13:05:18.061: INFO: Waiting for pod pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32 to disappear Feb 20 13:05:18.107: INFO: Pod pod-secrets-cb265aa2-7eb6-458c-bacb-765e81da0c32 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:05:18.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7446" for this suite. 
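The "with mappings" variant differs from a plain secret volume in that the items field remaps a Secret key to a chosen relative path, instead of mounting every key under its own name. A hedged sketch with made-up names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  username: admin
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/my-group/my-username"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: username                # this Secret key...
        path: my-group/my-username   # ...appears at this path in the mount
EOF
kubectl logs secret-mapping-demo   # prints: admin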
Feb 20 13:05:24.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:05:24.354: INFO: namespace secrets-7446 deletion completed in 6.242491448s • [SLOW TEST:14.614 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:05:24.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 20 13:05:24.452: INFO: Waiting up to 5m0s for pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731" in namespace "downward-api-4125" to be "success or failure" Feb 20 13:05:24.466: INFO: Pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02489ms Feb 20 13:05:26.509: INFO: Pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057342186s Feb 20 13:05:28.521: INFO: Pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069107934s Feb 20 13:05:30.531: INFO: Pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079401357s Feb 20 13:05:32.544: INFO: Pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092534349s STEP: Saw pod success Feb 20 13:05:32.544: INFO: Pod "downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731" satisfied condition "success or failure" Feb 20 13:05:32.548: INFO: Trying to get logs from node iruya-node pod downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731 container dapi-container: STEP: delete the pod Feb 20 13:05:32.678: INFO: Waiting for pod downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731 to disappear Feb 20 13:05:32.715: INFO: Pod downward-api-fdc460ab-d8b6-4902-90c6-ede6ec7a2731 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:05:32.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4125" for this suite. 
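This test reads pod metadata through Downward API environment variables. A minimal sketch of the mechanism (the fieldRef paths are the real API; pod and container names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo   # prints the pod name, namespace, and IP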
Feb 20 13:05:38.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:05:38.915: INFO: namespace downward-api-4125 deletion completed in 6.196486645s • [SLOW TEST:14.561 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:05:38.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:05:39.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987" in namespace "projected-4377" to be "success or failure" Feb 20 13:05:39.067: INFO: Pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50765ms Feb 20 13:05:41.087: INFO: Pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026617428s Feb 20 13:05:43.092: INFO: Pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03139444s Feb 20 13:05:45.114: INFO: Pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053468474s Feb 20 13:05:47.123: INFO: Pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062343063s STEP: Saw pod success Feb 20 13:05:47.123: INFO: Pod "downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987" satisfied condition "success or failure" Feb 20 13:05:47.129: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987 container client-container: STEP: delete the pod Feb 20 13:05:47.299: INFO: Waiting for pod downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987 to disappear Feb 20 13:05:47.309: INFO: Pod downwardapi-volume-944cf669-13f3-425e-ab15-62b8f2bf4987 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:05:47.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4377" for this suite. 
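Here the container's CPU request is surfaced through a projected downwardAPI volume using resourceFieldRef. A sketch under assumed names; with divisor: 1m, a 250m request reads back as 250:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container   # must match the consumer above
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs cpu-request-demo   # prints 250 (the request divided by the 1m divisor)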
Feb 20 13:05:53.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:05:53.458: INFO: namespace projected-4377 deletion completed in 6.138719924s • [SLOW TEST:14.543 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:05:53.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9489ab1d-24fb-42a1-8fab-ff1f5417789d STEP: Creating configMap with name cm-test-opt-upd-cdafa46e-7518-48ac-a0d6-bd03409204fa STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9489ab1d-24fb-42a1-8fab-ff1f5417789d STEP: Updating configmap cm-test-opt-upd-cdafa46e-7518-48ac-a0d6-bd03409204fa STEP: Creating configMap with name cm-test-opt-create-23e0fa53-a246-46d4-9bc1-e0402a7b6afc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:06:07.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5730" for this suite. 
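The "optional updates" case mounts ConfigMaps with optional: true, so the pod starts even while a referenced ConfigMap is absent, and the kubelet's periodic volume sync later materializes newly created keys and drops deleted ones. A sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: maybe-missing-cm
      optional: true   # pod starts even though the ConfigMap doesn't exist yet
EOF
kubectl create configmap maybe-missing-cm --from-literal=data-1=value-1
# After the kubelet's next volume sync (typically under a minute):
kubectl exec optional-cm-demo -- cat /etc/cm/data-1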
Feb 20 13:06:32.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:06:32.218: INFO: namespace configmap-5730 deletion completed in 24.226468506s • [SLOW TEST:38.760 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:06:32.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:06:32.328: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 20 13:06:37.338: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 13:06:41.360: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 20 13:06:43.385: INFO: Creating deployment "test-rollover-deployment" Feb 20 13:06:43.408: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 20 13:06:45.433: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 20 13:06:45.445: INFO: Ensure that both replica sets have 1 created replica Feb 20 13:06:45.450: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 20 13:06:45.467: INFO: Updating deployment test-rollover-deployment Feb 20 13:06:45.467: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 20 13:06:47.490: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 20 13:06:47.499: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 20 13:06:47.503: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:47.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Feb 20 13:06:49.517: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:49.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:06:51.532: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:51.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:06:53.516: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:53.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:06:55.519: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:55.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800814, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:06:57.569: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:57.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800814, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:06:59.551: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:06:59.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800814, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:07:01.521: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:07:01.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800814, 
loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:07:03.514: INFO: all replica sets need to contain the pod-template-hash label Feb 20 13:07:03.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800814, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717800803, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:07:05.521: INFO: Feb 20 13:07:05.521: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 20 13:07:05.545: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-463,SelfLink:/apis/apps/v1/namespaces/deployment-463/deployments/test-rollover-deployment,UID:c6356e65-434c-48d9-a0f0-eb70a04b12b9,ResourceVersion:25071562,Generation:2,CreationTimestamp:2020-02-20 13:06:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-20 13:06:43 +0000 UTC 2020-02-20 13:06:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-20 13:07:04 +0000 UTC 2020-02-20 13:06:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 20 13:07:05.580: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-463,SelfLink:/apis/apps/v1/namespaces/deployment-463/replicasets/test-rollover-deployment-854595fc44,UID:7fab8a3f-f665-438a-8a06-392b67f5492f,ResourceVersion:25071550,Generation:2,CreationTimestamp:2020-02-20 13:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c6356e65-434c-48d9-a0f0-eb70a04b12b9 0xc001a81ea7 0xc001a81ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 20 13:07:05.580: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 20 13:07:05.580: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-463,SelfLink:/apis/apps/v1/namespaces/deployment-463/replicasets/test-rollover-controller,UID:f8d5b853-3476-49e6-9911-70032142fb59,ResourceVersion:25071559,Generation:2,CreationTimestamp:2020-02-20 13:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c6356e65-434c-48d9-a0f0-eb70a04b12b9 0xc001a81dd7 0xc001a81dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 13:07:05.580: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-463,SelfLink:/apis/apps/v1/namespaces/deployment-463/replicasets/test-rollover-deployment-9b8b997cf,UID:98c477ad-9cb1-469e-8ae8-63f21a3e7e3e,ResourceVersion:25071509,Generation:2,CreationTimestamp:2020-02-20 13:06:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c6356e65-434c-48d9-a0f0-eb70a04b12b9 0xc001a81f70 0xc001a81f71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 13:07:05.585: INFO: Pod "test-rollover-deployment-854595fc44-s2t82" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-s2t82,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-463,SelfLink:/api/v1/namespaces/deployment-463/pods/test-rollover-deployment-854595fc44-s2t82,UID:d350d230-6d99-4c1e-b8f7-8957ccbeb3e8,ResourceVersion:25071533,Generation:0,CreationTimestamp:2020-02-20 13:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
7fab8a3f-f665-438a-8a06-392b67f5492f 0xc002b41ca7 0xc002b41ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5btwr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5btwr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-5btwr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b41d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b41d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:06:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:06:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:06:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:06:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-20 13:06:46 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-20 13:06:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a321b0d5986fda6b9f437b993240384317735915d72199ed4e3e0029baec1d0a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:07:05.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-463" for this suite. 
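The spec dumped above is what makes the rollover safe: RollingUpdate with MaxUnavailable:0 and MaxSurge:1 keeps one ready pod at all times, and MinReadySeconds:10 makes the controller wait ten seconds of readiness before counting a new replica as available. A hedged equivalent manifest (resource and container names illustrative; the images are the ones used in this run):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollover-demo
spec:
  replicas: 1
  minReadySeconds: 10        # ready for 10s before counting as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired count
      maxSurge: 1            # at most one extra pod during the rollout
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: app            # container name is illustrative
        image: docker.io/library/nginx:1.14-alpine
EOF
# Updating the template again before the rollout settles makes the controller
# abandon the intermediate ReplicaSet and roll straight over to the new one:
kubectl set image deployment/rollover-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/rollover-demo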
Feb 20 13:07:11.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:07:11.811: INFO: namespace deployment-463 deletion completed in 6.221287153s • [SLOW TEST:39.593 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:07:11.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Feb 20 13:07:11.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9255' Feb 20 13:07:14.923: INFO: stderr: "" Feb 20 13:07:14.924: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Feb 20 13:07:15.932: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:15.932: INFO: Found 0 / 1 Feb 20 13:07:16.939: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:16.939: INFO: Found 0 / 1 Feb 20 13:07:17.937: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:17.937: INFO: Found 0 / 1 Feb 20 13:07:18.931: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:18.931: INFO: Found 0 / 1 Feb 20 13:07:19.935: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:19.935: INFO: Found 0 / 1 Feb 20 13:07:20.929: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:20.929: INFO: Found 0 / 1 Feb 20 13:07:21.934: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:21.934: INFO: Found 0 / 1 Feb 20 13:07:22.931: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:22.932: INFO: Found 0 / 1 Feb 20 13:07:23.936: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:23.936: INFO: Found 1 / 1 Feb 20 13:07:23.936: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 20 13:07:23.940: INFO: Selector matched 1 pods for map[app:redis] Feb 20 13:07:23.940: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 20 13:07:23.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nm8m redis-master --namespace=kubectl-9255' Feb 20 13:07:24.103: INFO: stderr: "" Feb 20 13:07:24.103: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Feb 13:07:21.932 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Feb 13:07:21.932 # Server started, Redis version 3.2.12\n1:M 20 Feb 13:07:21.932 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Feb 13:07:21.932 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 20 13:07:24.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nm8m redis-master --namespace=kubectl-9255 --tail=1' Feb 20 13:07:24.198: INFO: stderr: "" Feb 20 13:07:24.198: INFO: stdout: "1:M 20 Feb 13:07:21.932 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 20 13:07:24.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nm8m redis-master --namespace=kubectl-9255 --limit-bytes=1' Feb 20 13:07:24.319: INFO: stderr: "" Feb 20 13:07:24.319: INFO: stdout: " " STEP: exposing timestamps Feb 20 13:07:24.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nm8m redis-master --namespace=kubectl-9255 --tail=1 --timestamps' Feb 20 13:07:24.427: INFO: stderr: "" Feb 20 13:07:24.427: INFO: stdout: "2020-02-20T13:07:21.934252895Z 1:M 20 Feb 13:07:21.932 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 20 13:07:26.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nm8m redis-master --namespace=kubectl-9255 --since=1s' Feb 20 13:07:27.076: INFO: stderr: "" Feb 20 13:07:27.076: INFO: stdout: "" Feb 20 13:07:27.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nm8m redis-master --namespace=kubectl-9255 --since=24h' Feb 20 13:07:27.225: INFO: stderr: "" Feb 20 13:07:27.225: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Feb 13:07:21.932 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Feb 13:07:21.932 # Server started, Redis version 3.2.12\n1:M 20 Feb 13:07:21.932 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Feb 13:07:21.932 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Feb 20 13:07:27.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9255' Feb 20 13:07:27.353: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 13:07:27.353: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 20 13:07:27.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9255' Feb 20 13:07:27.486: INFO: stderr: "No resources found.\n" Feb 20 13:07:27.486: INFO: stdout: "" Feb 20 13:07:27.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9255 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 13:07:27.639: INFO: stderr: "" Feb 20 13:07:27.639: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:07:27.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9255" for this suite. 
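The filtering flags exercised above are ordinary kubectl logs options and work against any pod; a condensed recap using this run's pod and container names (they will differ elsewhere):

POD=redis-master-5nm8m; C=redis-master; NS=kubectl-9255
kubectl logs "$POD" "$C" -n "$NS"                         # full container log
kubectl logs "$POD" "$C" -n "$NS" --tail=1                # last line only
kubectl logs "$POD" "$C" -n "$NS" --limit-bytes=1         # first byte only
kubectl logs "$POD" "$C" -n "$NS" --tail=1 --timestamps   # RFC3339 timestamp prefix
kubectl logs "$POD" "$C" -n "$NS" --since=1s              # empty unless logged in the last second
kubectl logs "$POD" "$C" -n "$NS" --since=24h             # everything from the past day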
Feb 20 13:07:49.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:07:49.806: INFO: namespace kubectl-9255 deletion completed in 22.16364877s • [SLOW TEST:37.994 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:07:49.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 20 13:08:00.591: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Feb 20 13:08:10.739: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:08:11.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3681" for this suite. 
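The grace-period flow this test drives is the same one kubectl exposes: deletion stamps deletionTimestamp, the kubelet sends SIGTERM, waits out the grace window, then SIGKILLs whatever remains. A hedged sketch (pod name hypothetical):

kubectl delete pod graceful-demo --grace-period=30
# Forced variant, as used by this suite's cleanup; per the kubectl warning
# earlier in the log, it does not wait for confirmation that the process stopped:
kubectl delete pod graceful-demo --grace-period=0 --force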
Feb 20 13:08:17.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:08:17.416: INFO: namespace pods-3681 deletion completed in 6.166659751s • [SLOW TEST:27.609 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:08:17.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 20 13:08:28.223: INFO: Successfully updated pod "annotationupdate4e6579c9-fc98-4afe-a47a-6ce95064660e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:08:30.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4978" for this suite. 
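Annotation changes reach the container because the kubelet re-resolves fieldRef: metadata.annotations into the projected volume file on its sync loop. A sketch with assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build=2 --overwrite
# After the kubelet's next sync (typically well under a minute):
kubectl exec annotation-demo -- cat /etc/podinfo/annotations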
Feb 20 13:08:52.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:08:52.473: INFO: namespace projected-4978 deletion completed in 22.116025097s • [SLOW TEST:35.057 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:08:52.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:08:52.711: INFO: Create a RollingUpdate DaemonSet Feb 20 13:08:52.717: INFO: Check that daemon pods launch on every node of the cluster Feb 20 13:08:52.832: INFO: Number of nodes with available pods: 0 Feb 20 13:08:52.832: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:08:54.869: INFO: Number of nodes with available pods: 0 Feb 20 13:08:54.869: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:08:55.872: INFO: Number of nodes with available pods: 0 Feb 20 13:08:55.872: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:08:56.862: INFO: Number of nodes with available pods: 0 Feb 20 13:08:56.862: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:08:57.839: INFO: Number of nodes with available pods: 0 Feb 20 13:08:57.839: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:01.625: INFO: Number of nodes with available pods: 0 Feb 20 13:09:01.625: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:01.855: INFO: Number of nodes with available pods: 0 Feb 20 13:09:01.855: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:02.840: INFO: Number of nodes with available pods: 0 Feb 20 13:09:02.840: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:03.894: INFO: Number of nodes with available pods: 0 Feb 20 13:09:03.894: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:04.852: INFO: Number of nodes with available pods: 0 Feb 20 13:09:04.852: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:05.875: INFO: Number of nodes with available pods: 1 Feb 20 13:09:05.875: INFO: Node iruya-node is running more than one daemon pod Feb 20 13:09:06.848: INFO: Number of nodes with available pods: 2 Feb 20 13:09:06.848: INFO: Number of running nodes: 2, number of available pods: 2 Feb 20 13:09:06.848: INFO: Update the DaemonSet to trigger a rollout Feb 20 13:09:06.867: INFO: Updating DaemonSet daemon-set Feb 20 13:09:18.060: INFO: 
Roll back the DaemonSet before rollout is complete Feb 20 13:09:18.095: INFO: Updating DaemonSet daemon-set Feb 20 13:09:18.095: INFO: Make sure DaemonSet rollback is complete Feb 20 13:09:18.140: INFO: Wrong image for pod: daemon-set-d5f66. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 20 13:09:18.140: INFO: Pod daemon-set-d5f66 is not available Feb 20 13:09:19.154: INFO: Wrong image for pod: daemon-set-d5f66. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 20 13:09:19.154: INFO: Pod daemon-set-d5f66 is not available Feb 20 13:09:20.292: INFO: Wrong image for pod: daemon-set-d5f66. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 20 13:09:20.292: INFO: Pod daemon-set-d5f66 is not available Feb 20 13:09:21.151: INFO: Wrong image for pod: daemon-set-d5f66. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 20 13:09:21.151: INFO: Pod daemon-set-d5f66 is not available Feb 20 13:09:22.156: INFO: Wrong image for pod: daemon-set-d5f66. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 20 13:09:22.156: INFO: Pod daemon-set-d5f66 is not available Feb 20 13:09:23.180: INFO: Pod daemon-set-rtzck is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-726, will wait for the garbage collector to delete the pods Feb 20 13:09:23.267: INFO: Deleting DaemonSet.extensions daemon-set took: 21.33922ms Feb 20 13:09:24.767: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.500476744s Feb 20 13:09:36.604: INFO: Number of nodes with available pods: 0 Feb 20 13:09:36.604: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 13:09:36.609: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-726/daemonsets","resourceVersion":"25071949"},"items":null} Feb 20 13:09:36.613: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-726/pods","resourceVersion":"25071949"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:09:36.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-726" for this suite. 
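(Editor's note: the rollback flow in the spec above can be reproduced by hand. The DaemonSet name, namespace, and the deliberately broken image foo:non-existent are taken from the log; the container name "app" is a hypothetical placeholder:)

  kubectl -n daemonsets-726 set image daemonset/daemon-set app=foo:non-existent
  kubectl -n daemonsets-726 rollout undo daemonset/daemon-set
  # Pods never touched by the bad rollout keep running ("without unnecessary
  # restarts"); only the already-broken pod (daemon-set-d5f66 above) is replaced.
  kubectl -n daemonsets-726 rollout status daemonset/daemon-set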
Feb 20 13:09:42.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:09:42.786: INFO: namespace daemonsets-726 deletion completed in 6.143217983s • [SLOW TEST:50.312 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:09:42.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Feb 20 13:09:42.856: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix331982941/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:09:42.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3375" for this suite. 
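(Editor's note: the proxy invocation is logged verbatim above. The "retrieving proxy /api/ output" step amounts to an HTTP request over the Unix socket; a sketch with an illustrative socket path, assuming a curl build with --unix-socket support:)

  kubectl proxy --unix-socket=/tmp/kubectl-proxy-demo/test &
  curl --unix-socket /tmp/kubectl-proxy-demo/test http://localhost/api/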
Feb 20 13:09:48.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:09:49.048: INFO: namespace kubectl-3375 deletion completed in 6.119277258s • [SLOW TEST:6.261 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:09:49.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 20 13:09:56.564: INFO: 10 pods remaining Feb 20 13:09:56.564: INFO: 10 pods has nil DeletionTimestamp Feb 20 13:09:56.564: INFO: Feb 20 13:09:58.484: INFO: 9 pods remaining Feb 20 13:09:58.485: INFO: 0 pods has nil DeletionTimestamp Feb 20 13:09:58.485: INFO: STEP: Gathering metrics W0220 13:09:59.204439 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 13:09:59.204: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:09:59.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9382" for this suite. 
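(Editor's note: "the deleteOptions says so" refers to propagationPolicy: Foreground, which blocks the owner's deletion until all dependents are gone. A sketch against the raw API via a local proxy; the RC name is a hypothetical placeholder since the log never prints it:)

  kubectl proxy --port=8080 &
  curl -X DELETE \
    'http://127.0.0.1:8080/api/v1/namespaces/gc-9382/replicationcontrollers/demo-rc' \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
  # While dependent pods still carry a pending DeletionTimestamp, the RC stays
  # visible, matching the "N pods remaining" countdown in the log above.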
Feb 20 13:10:13.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:10:13.575: INFO: namespace gc-9382 deletion completed in 14.368127379s • [SLOW TEST:24.526 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:10:13.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Feb 20 13:10:13.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-25 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 20 13:10:23.811: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0220 13:10:21.900928 391 log.go:172] (0xc00085e0b0) (0xc0008343c0) Create stream\nI0220 13:10:21.900982 391 log.go:172] (0xc00085e0b0) (0xc0008343c0) Stream added, broadcasting: 1\nI0220 13:10:21.910247 391 log.go:172] (0xc00085e0b0) Reply frame received for 1\nI0220 13:10:21.910302 391 log.go:172] (0xc00085e0b0) (0xc000834000) Create stream\nI0220 13:10:21.910317 391 log.go:172] (0xc00085e0b0) (0xc000834000) Stream added, broadcasting: 3\nI0220 13:10:21.912665 391 log.go:172] (0xc00085e0b0) Reply frame received for 3\nI0220 13:10:21.912699 391 log.go:172] (0xc00085e0b0) (0xc0006640a0) Create stream\nI0220 13:10:21.912710 391 log.go:172] (0xc00085e0b0) (0xc0006640a0) Stream added, broadcasting: 5\nI0220 13:10:21.914688 391 log.go:172] (0xc00085e0b0) Reply frame received for 5\nI0220 13:10:21.914728 391 log.go:172] (0xc00085e0b0) (0xc000664140) Create stream\nI0220 13:10:21.914738 391 log.go:172] (0xc00085e0b0) (0xc000664140) Stream added, broadcasting: 7\nI0220 13:10:21.918032 391 log.go:172] (0xc00085e0b0) Reply frame received for 7\nI0220 13:10:21.918206 391 log.go:172] (0xc000834000) (3) Writing data frame\nI0220 13:10:21.918384 391 log.go:172] (0xc000834000) (3) Writing data frame\nI0220 13:10:21.940029 391 log.go:172] (0xc00085e0b0) Data frame received for 5\nI0220 13:10:21.940090 391 log.go:172] (0xc0006640a0) (5) Data frame handling\nI0220 13:10:21.940105 391 log.go:172] (0xc0006640a0) (5) Data frame sent\nI0220 13:10:21.948453 391 log.go:172] (0xc00085e0b0) Data frame received for 5\nI0220 13:10:21.948476 391 log.go:172] (0xc0006640a0) (5) Data frame handling\nI0220 13:10:21.948491 391 log.go:172] (0xc0006640a0) (5) Data frame sent\nI0220 13:10:23.766632 391 log.go:172] (0xc00085e0b0) (0xc000834000) Stream removed, broadcasting: 3\nI0220 13:10:23.766749 391 log.go:172] (0xc00085e0b0) Data frame received for 1\nI0220 13:10:23.766790 391 log.go:172] (0xc00085e0b0) (0xc000664140) Stream removed, broadcasting: 7\nI0220 13:10:23.766834 391 log.go:172] (0xc0008343c0) (1) Data frame handling\nI0220 13:10:23.766980 391 log.go:172] (0xc0008343c0) (1) Data frame sent\nI0220 13:10:23.767057 391 log.go:172] (0xc00085e0b0) (0xc0006640a0) Stream removed, broadcasting: 5\nI0220 13:10:23.767100 391 log.go:172] (0xc00085e0b0) (0xc0008343c0) Stream removed, broadcasting: 1\nI0220 13:10:23.767205 391 log.go:172] (0xc00085e0b0) Go away received\nI0220 13:10:23.767485 391 log.go:172] (0xc00085e0b0) (0xc0008343c0) Stream removed, broadcasting: 1\nI0220 13:10:23.767532 391 log.go:172] (0xc00085e0b0) (0xc000834000) Stream removed, broadcasting: 3\nI0220 13:10:23.767542 391 log.go:172] (0xc00085e0b0) (0xc0006640a0) Stream removed, broadcasting: 5\nI0220 13:10:23.767552 391 log.go:172] (0xc00085e0b0) (0xc000664140) Stream removed, broadcasting: 7\n" Feb 20 13:10:23.811: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:10:25.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-25" for this suite. 
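(Editor's note: the full kubectl run invocation appears in the log above; re-running it outside the framework looks like this, with "abcd1234" as the piped stdin that produces the captured stdout. The --generator=job/v1 flag is deprecated in this release, exactly as the stderr warns:)

  printf 'abcd1234' | kubectl --namespace=kubectl-25 run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
    --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
  # --rm deletes the job once the attached session ends, hence the final line:
  #   job.batch "e2e-test-rm-busybox-job" deleted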
Feb 20 13:10:31.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:10:31.981: INFO: namespace kubectl-25 deletion completed in 6.156497193s • [SLOW TEST:18.406 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:10:31.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:10:32.131: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 20 13:10:37.143: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 13:10:41.157: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 20 13:10:41.189: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3619,SelfLink:/apis/apps/v1/namespaces/deployment-3619/deployments/test-cleanup-deployment,UID:a3e964f6-3cc7-452d-8646-6cf83ff4a03e,ResourceVersion:25072222,Generation:1,CreationTimestamp:2020-02-20 13:10:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 20 13:10:41.223: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3619,SelfLink:/apis/apps/v1/namespaces/deployment-3619/replicasets/test-cleanup-deployment-55bbcbc84c,UID:0e22122e-3edb-45fb-9cac-8eb17ec4e672,ResourceVersion:25072224,Generation:1,CreationTimestamp:2020-02-20 13:10:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a3e964f6-3cc7-452d-8646-6cf83ff4a03e 0xc002dec107 0xc002dec108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 13:10:41.223: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 20 13:10:41.223: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3619,SelfLink:/apis/apps/v1/namespaces/deployment-3619/replicasets/test-cleanup-controller,UID:6b93be6f-5907-41f1-bf14-b404f4cc65f9,ResourceVersion:25072223,Generation:1,CreationTimestamp:2020-02-20 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a3e964f6-3cc7-452d-8646-6cf83ff4a03e 0xc002dec01f 0xc002dec030}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 20 13:10:41.238: INFO: Pod "test-cleanup-controller-9hrtr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-9hrtr,GenerateName:test-cleanup-controller-,Namespace:deployment-3619,SelfLink:/api/v1/namespaces/deployment-3619/pods/test-cleanup-controller-9hrtr,UID:f3ce83ff-1376-4efc-856f-ea338dfedcc0,ResourceVersion:25072217,Generation:0,CreationTimestamp:2020-02-20 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 6b93be6f-5907-41f1-bf14-b404f4cc65f9 0xc002959b17 0xc002959b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnj9w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnj9w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnj9w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002959b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002959bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:10:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:10:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:10:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:10:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-20 13:10:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:10:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://aaab97c1024b1477fbc142763368295df8fcac94dbcc875b61699b3365d361e0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:10:41.239: INFO: Pod "test-cleanup-deployment-55bbcbc84c-j5rjv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-j5rjv,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3619,SelfLink:/api/v1/namespaces/deployment-3619/pods/test-cleanup-deployment-55bbcbc84c-j5rjv,UID:17eba299-0938-4ef0-958f-6e487efa6c46,ResourceVersion:25072225,Generation:0,CreationTimestamp:2020-02-20 13:10:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 0e22122e-3edb-45fb-9cac-8eb17ec4e672 0xc002959c97 0xc002959c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnj9w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnj9w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wnj9w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002959d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002959d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:10:41.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3619" for this suite. 
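(Editor's note: the Deployment dump above shows RevisionHistoryLimit:*0, which is what makes the superseded ReplicaSet eligible for cleanup as soon as the rollout replaces it. A condensed manifest reconstructing just the relevant fields from that dump:)

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-cleanup-deployment
    namespace: deployment-3619
    labels:
      name: cleanup-pod
  spec:
    replicas: 1
    revisionHistoryLimit: 0   # delete superseded ReplicaSets immediately
    selector:
      matchLabels:
        name: cleanup-pod
    template:
      metadata:
        labels:
          name: cleanup-pod
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF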
Feb 20 13:10:47.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:10:47.971: INFO: namespace deployment-3619 deletion completed in 6.667697005s • [SLOW TEST:15.990 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:10:47.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-b22ed1fb-527e-4a89-b4a5-e3d81856c2d5 in namespace container-probe-7248 Feb 20 13:10:58.255: INFO: Started pod test-webserver-b22ed1fb-527e-4a89-b4a5-e3d81856c2d5 in namespace container-probe-7248 STEP: checking the pod's current state and verifying that restartCount is present Feb 20 13:10:58.260: INFO: Initial restart count of pod test-webserver-b22ed1fb-527e-4a89-b4a5-e3d81856c2d5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:15:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7248" for this suite. 
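(Editor's note: this spec asserts that restartCount stays 0 for a pod whose HTTP liveness probe keeps succeeding; the framework records the initial count, waits roughly four minutes (13:10:58 to 13:15:00 above), and fails if it changes. A hedged sketch; the image is inferred from the test-webserver pod name, and the probe path, port, and timings are illustrative assumptions:)

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver-demo
  spec:
    containers:
    - name: test-webserver
      image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed image
      ports:
      - containerPort: 80
      livenessProbe:
        httpGet:
          path: /          # must keep returning 200 for the pod to stay unrestarted
          port: 80
        initialDelaySeconds: 15
        failureThreshold: 3
  EOF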
Feb 20 13:15:06.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:15:06.274: INFO: namespace container-probe-7248 deletion completed in 6.166540142s • [SLOW TEST:258.303 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:15:06.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-6c924755-eb3c-4df7-807e-2aae5819173a STEP: Creating a pod to test consume secrets Feb 20 13:15:06.353: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff" in namespace "projected-721" to be "success or failure" Feb 20 13:15:06.402: INFO: Pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff": Phase="Pending", Reason="", readiness=false. Elapsed: 48.759918ms Feb 20 13:15:08.410: INFO: Pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056366477s Feb 20 13:15:10.419: INFO: Pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065442444s Feb 20 13:15:12.426: INFO: Pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072074712s Feb 20 13:15:14.442: INFO: Pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088851568s STEP: Saw pod success Feb 20 13:15:14.442: INFO: Pod "pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff" satisfied condition "success or failure" Feb 20 13:15:14.449: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff container projected-secret-volume-test: STEP: delete the pod Feb 20 13:15:14.541: INFO: Waiting for pod pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff to disappear Feb 20 13:15:14.573: INFO: Pod pod-projected-secrets-ea199054-9ce7-4590-9f56-4929b94b1dff no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:15:14.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-721" for this suite. 
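(Editor's note: "non-root with defaultMode and fsGroup set" maps to a pod-level securityContext plus a projected volume defaultMode. A minimal sketch; the secret name, uid/gid, and mode values are illustrative, not read from the run:)

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    securityContext:
      runAsUser: 1000    # non-root
      fsGroup: 1001      # group ownership applied to the volume
    containers:
    - name: projected-secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        defaultMode: 0440   # octal file mode applied to projected entries
        sources:
        - secret:
            name: projected-secret-test   # hypothetical secret name
  EOF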
Feb 20 13:15:20.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:15:20.741: INFO: namespace projected-721 deletion completed in 6.159010651s • [SLOW TEST:14.467 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:15:20.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:15:20.801: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471" in namespace "projected-7709" to be "success or failure" Feb 20 13:15:20.818: INFO: Pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471": Phase="Pending", Reason="", readiness=false. Elapsed: 16.675311ms Feb 20 13:15:22.827: INFO: Pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026042554s Feb 20 13:15:24.834: INFO: Pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032821682s Feb 20 13:15:26.856: INFO: Pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055202785s Feb 20 13:15:28.868: INFO: Pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066438849s STEP: Saw pod success Feb 20 13:15:28.868: INFO: Pod "downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471" satisfied condition "success or failure" Feb 20 13:15:28.876: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471 container client-container: STEP: delete the pod Feb 20 13:15:28.962: INFO: Waiting for pod downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471 to disappear Feb 20 13:15:28.965: INFO: Pod downwardapi-volume-2181daba-5017-4758-87f4-51576e1f4471 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:15:28.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7709" for this suite. 
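(Editor's note: "container's memory request" is surfaced through a downwardAPI resourceFieldRef inside the projected volume. A sketch with illustrative request and divisor values:)

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
                divisor: 1Mi   # the file then contains "32"
  EOF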
Feb 20 13:15:34.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:15:35.106: INFO: namespace projected-7709 deletion completed in 6.136205479s • [SLOW TEST:14.365 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:15:35.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:15:41.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-312" for this suite. Feb 20 13:15:47.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:15:47.617: INFO: namespace emptydir-wrapper-312 deletion completed in 6.125844207s • [SLOW TEST:12.510 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:15:47.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:16:11.794: INFO: Container started at 2020-02-20 13:15:54 +0000 UTC, pod became ready at 2020-02-20 13:16:10 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:16:11.794: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8938" for this suite. Feb 20 13:16:33.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:16:33.933: INFO: namespace container-probe-8938 deletion completed in 22.130128862s • [SLOW TEST:46.317 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:16:33.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3796145a-3dc7-40d4-9245-24a848b78e7c STEP: Creating a pod to test consume secrets Feb 20 13:16:34.246: INFO: Waiting up to 5m0s for pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc" in namespace "secrets-7964" to be "success or failure" Feb 20 13:16:34.257: INFO: Pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.96186ms Feb 20 13:16:36.270: INFO: Pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024031391s Feb 20 13:16:38.277: INFO: Pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030307472s Feb 20 13:16:40.285: INFO: Pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038430548s Feb 20 13:16:42.291: INFO: Pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044867271s STEP: Saw pod success Feb 20 13:16:42.291: INFO: Pod "pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc" satisfied condition "success or failure" Feb 20 13:16:42.294: INFO: Trying to get logs from node iruya-node pod pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc container secret-volume-test: STEP: delete the pod Feb 20 13:16:42.618: INFO: Waiting for pod pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc to disappear Feb 20 13:16:42.625: INFO: Pod pod-secrets-b750eb14-624d-4b5e-8746-30476afd14cc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:16:42.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7964" for this suite. 
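(Editor's note: the spec above creates a second namespace, secret-namespace-7520 (destroyed just below), holding a secret with the same name as the one in secrets-7964, and verifies the pod still mounts its namespace-local copy: secret volumes resolve strictly within the pod's own namespace. Sketched with hypothetical secret names and values:)

  kubectl create namespace secret-namespace-7520
  kubectl -n secret-namespace-7520 create secret generic secret-test --from-literal=data-1=decoy
  kubectl -n secrets-7964 create secret generic secret-test --from-literal=data-1=value-1
  # A pod in secrets-7964 that mounts "secret-test" sees value-1, never decoy.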
Feb 20 13:16:48.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:16:48.868: INFO: namespace secrets-7964 deletion completed in 6.238305765s STEP: Destroying namespace "secret-namespace-7520" for this suite. Feb 20 13:16:54.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:16:55.033: INFO: namespace secret-namespace-7520 deletion completed in 6.164531578s • [SLOW TEST:21.099 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:16:55.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:16:55.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9647" for this suite. 
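(Editor's note: the "secure master service" is the built-in kubernetes Service in the default namespace; the spec checks that it exposes HTTPS. A one-line verification, assuming a conformant cluster where the https port is 443:)

  kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
  # expected output on a conformant cluster: 443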
Feb 20 13:17:01.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:17:01.202: INFO: namespace services-9647 deletion completed in 6.092831851s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.169 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:17:01.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Feb 20 13:17:09.962: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3439 pod-service-account-555d8705-20fc-4d5a-a6d9-db69056b6bf0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 20 13:17:10.462: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3439 pod-service-account-555d8705-20fc-4d5a-a6d9-db69056b6bf0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 20 13:17:10.961: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3439 pod-service-account-555d8705-20fc-4d5a-a6d9-db69056b6bf0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:17:11.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3439" for this suite. 
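(Editor's note: the three kubectl exec invocations are logged verbatim above; token, ca.crt, and namespace are the standard auto-mounted trio under the service account path. Reproducing the first read:)

  kubectl exec --namespace=svcaccounts-3439 \
    pod-service-account-555d8705-20fc-4d5a-a6d9-db69056b6bf0 -c=test -- \
    cat /var/run/secrets/kubernetes.io/serviceaccount/token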
Feb 20 13:17:19.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:17:19.739: INFO: namespace svcaccounts-3439 deletion completed in 8.182003926s • [SLOW TEST:18.537 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:17:19.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6342/secret-test-522a83dc-d74e-446c-abc6-21b72e960192 STEP: Creating a pod to test consume secrets Feb 20 13:17:19.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565" in namespace "secrets-6342" to be "success or failure" Feb 20 13:17:19.960: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565": Phase="Pending", Reason="", readiness=false. Elapsed: 22.58272ms Feb 20 13:17:21.965: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027613874s Feb 20 13:17:23.973: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035197403s Feb 20 13:17:26.053: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11509449s Feb 20 13:17:28.113: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174951458s Feb 20 13:17:30.120: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182107104s STEP: Saw pod success Feb 20 13:17:30.120: INFO: Pod "pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565" satisfied condition "success or failure" Feb 20 13:17:30.123: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565 container env-test: STEP: delete the pod Feb 20 13:17:30.266: INFO: Waiting for pod pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565 to disappear Feb 20 13:17:30.286: INFO: Pod pod-configmaps-c86c9534-035b-4e5e-a52c-8fd6aae3f565 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:17:30.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6342" for this suite. 
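(Editor's note: "consumable via the environment" means env valueFrom.secretKeyRef rather than a volume mount. A condensed sketch; the namespace comes from the log, while the shortened secret name, key, and container command are illustrative:)

  kubectl -n secrets-6342 create secret generic secret-test --from-literal=data-1=value-1
  cat <<'EOF' | kubectl -n secrets-6342 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox:1.29
      command: ["sh", "-c", "env | grep SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-test
            key: data-1
  EOF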
Feb 20 13:17:36.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:17:36.473: INFO: namespace secrets-6342 deletion completed in 6.177299546s • [SLOW TEST:16.733 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:17:36.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 20 13:17:36.590: INFO: Waiting up to 5m0s for pod "pod-b53136ec-9035-4531-884c-8787660c2385" in namespace "emptydir-4921" to be "success or failure" Feb 20 13:17:36.635: INFO: Pod "pod-b53136ec-9035-4531-884c-8787660c2385": Phase="Pending", Reason="", readiness=false. Elapsed: 45.167627ms Feb 20 13:17:38.644: INFO: Pod "pod-b53136ec-9035-4531-884c-8787660c2385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053963894s Feb 20 13:17:40.650: INFO: Pod "pod-b53136ec-9035-4531-884c-8787660c2385": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060320857s Feb 20 13:17:42.661: INFO: Pod "pod-b53136ec-9035-4531-884c-8787660c2385": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071021861s Feb 20 13:17:44.679: INFO: Pod "pod-b53136ec-9035-4531-884c-8787660c2385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088704259s STEP: Saw pod success Feb 20 13:17:44.679: INFO: Pod "pod-b53136ec-9035-4531-884c-8787660c2385" satisfied condition "success or failure" Feb 20 13:17:44.687: INFO: Trying to get logs from node iruya-node pod pod-b53136ec-9035-4531-884c-8787660c2385 container test-container: STEP: delete the pod Feb 20 13:17:44.852: INFO: Waiting for pod pod-b53136ec-9035-4531-884c-8787660c2385 to disappear Feb 20 13:17:44.870: INFO: Pod pod-b53136ec-9035-4531-884c-8787660c2385 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:17:44.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4921" for this suite. 
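(Editor's note: "(root,0666,tmpfs)" encodes the user, file mode, and medium under test; the medium comes from emptyDir.medium: Memory. A minimal sketch with an illustrative write-then-stat command in place of the framework's mounttest container:)

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "mount | grep /cache && touch /cache/f && chmod 0666 /cache/f && stat -c '%a' /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir:
        medium: Memory   # backed by tmpfs on Linux nodes
  EOF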
Feb 20 13:17:50.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:17:51.024: INFO: namespace emptydir-4921 deletion completed in 6.145996227s • [SLOW TEST:14.550 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:17:51.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qppjh in namespace proxy-2093 I0220 13:17:51.324309 8 runners.go:180] Created replication controller with name: proxy-service-qppjh, namespace: proxy-2093, replica count: 1 I0220 13:17:52.375249 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:53.375565 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:54.375939 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:55.376265 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:56.376528 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:57.376774 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:58.377068 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:17:59.377400 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 13:18:00.377680 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 13:18:01.377915 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 13:18:02.378334 8 runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 13:18:03.378737 8 
runners.go:180] proxy-service-qppjh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 20 13:18:03.386: INFO: setup took 12.188256513s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 20 13:18:03.425: INFO: (0) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 38.52032ms) Feb 20 13:18:03.425: INFO: (0) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 38.485684ms) Feb 20 13:18:03.426: INFO: (0) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 39.093559ms) Feb 20 13:18:03.426: INFO: (0) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 39.172552ms) Feb 20 13:18:03.426: INFO: (0) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 39.129754ms) Feb 20 13:18:03.426: INFO: (0) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 39.777447ms) Feb 20 13:18:03.426: INFO: (0) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 39.795114ms) Feb 20 13:18:03.427: INFO: (0) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 39.965105ms) Feb 20 13:18:03.427: INFO: (0) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 39.984352ms) Feb 20 13:18:03.429: INFO: (0) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 42.122282ms) Feb 20 13:18:03.429: INFO: (0) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 42.246574ms) Feb 20 13:18:03.441: INFO: (0) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 54.585024ms) Feb 20 13:18:03.442: INFO: (0) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 55.519038ms) Feb 20 13:18:03.442: INFO: (0) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 55.445888ms) Feb 20 13:18:03.442: INFO: (0) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test<... (200; 13.719972ms) Feb 20 13:18:03.461: INFO: (1) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 16.224627ms) Feb 20 13:18:03.461: INFO: (1) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 16.10824ms) Feb 20 13:18:03.462: INFO: (1) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... 
(200; 16.761994ms) Feb 20 13:18:03.462: INFO: (1) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 17.435708ms) Feb 20 13:18:03.462: INFO: (1) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 17.32001ms) Feb 20 13:18:03.464: INFO: (1) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 19.015992ms) Feb 20 13:18:03.466: INFO: (1) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 20.382062ms) Feb 20 13:18:03.466: INFO: (1) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 20.437985ms) Feb 20 13:18:03.466: INFO: (1) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 20.494782ms) Feb 20 13:18:03.468: INFO: (1) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 22.642834ms) Feb 20 13:18:03.483: INFO: (2) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 14.834775ms) Feb 20 13:18:03.484: INFO: (2) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 15.408355ms) Feb 20 13:18:03.484: INFO: (2) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 15.987981ms) Feb 20 13:18:03.485: INFO: (2) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 17.362157ms) Feb 20 13:18:03.485: INFO: (2) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 17.207584ms) Feb 20 13:18:03.486: INFO: (2) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 18.27979ms) Feb 20 13:18:03.487: INFO: (2) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 18.696706ms) Feb 20 13:18:03.487: INFO: (2) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 18.926936ms) Feb 20 13:18:03.487: INFO: (2) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 19.120741ms) Feb 20 13:18:03.487: INFO: (2) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 19.26933ms) Feb 20 13:18:03.488: INFO: (2) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... 
(200; 19.696008ms) Feb 20 13:18:03.490: INFO: (2) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 21.833902ms) Feb 20 13:18:03.490: INFO: (2) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 22.150426ms) Feb 20 13:18:03.490: INFO: (2) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 22.335069ms) Feb 20 13:18:03.493: INFO: (2) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 24.92203ms) Feb 20 13:18:03.506: INFO: (3) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 12.916444ms) Feb 20 13:18:03.506: INFO: (3) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 13.22973ms) Feb 20 13:18:03.507: INFO: (3) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 13.780068ms) Feb 20 13:18:03.507: INFO: (3) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 13.901515ms) Feb 20 13:18:03.507: INFO: (3) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 13.903803ms) Feb 20 13:18:03.508: INFO: (3) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 14.662628ms) Feb 20 13:18:03.508: INFO: (3) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 15.120042ms) Feb 20 13:18:03.508: INFO: (3) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 15.192763ms) Feb 20 13:18:03.509: INFO: (3) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: ... (200; 19.539332ms) Feb 20 13:18:03.513: INFO: (3) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 19.591446ms) Feb 20 13:18:03.513: INFO: (3) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 20.334732ms) Feb 20 13:18:03.514: INFO: (3) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 20.495334ms) Feb 20 13:18:03.525: INFO: (4) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 11.4491ms) Feb 20 13:18:03.527: INFO: (4) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 13.663771ms) Feb 20 13:18:03.527: INFO: (4) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test<... (200; 27.959151ms) Feb 20 13:18:03.542: INFO: (4) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... 
(200; 28.005138ms) Feb 20 13:18:03.542: INFO: (4) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 27.99006ms) Feb 20 13:18:03.542: INFO: (4) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 28.429206ms) Feb 20 13:18:03.543: INFO: (4) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 28.932303ms) Feb 20 13:18:03.543: INFO: (4) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 29.122802ms) Feb 20 13:18:03.545: INFO: (4) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 31.037365ms) Feb 20 13:18:03.545: INFO: (4) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 31.398559ms) Feb 20 13:18:03.547: INFO: (4) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 33.11448ms) Feb 20 13:18:03.556: INFO: (5) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: ... (200; 11.161502ms) Feb 20 13:18:03.559: INFO: (5) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 11.406268ms) Feb 20 13:18:03.559: INFO: (5) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 11.851935ms) Feb 20 13:18:03.559: INFO: (5) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 12.050847ms) Feb 20 13:18:03.559: INFO: (5) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 12.119328ms) Feb 20 13:18:03.561: INFO: (5) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 13.793591ms) Feb 20 13:18:03.562: INFO: (5) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 14.829522ms) Feb 20 13:18:03.568: INFO: (5) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 20.322613ms) Feb 20 13:18:03.569: INFO: (5) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 21.694791ms) Feb 20 13:18:03.569: INFO: (5) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 21.86949ms) Feb 20 13:18:03.569: INFO: (5) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 21.963328ms) Feb 20 13:18:03.570: INFO: (5) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 22.667731ms) Feb 20 13:18:03.573: INFO: (5) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 26.004558ms) Feb 20 13:18:03.574: INFO: (5) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 27.009359ms) Feb 20 13:18:03.576: INFO: (5) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 28.287835ms) Feb 20 13:18:03.605: INFO: (6) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 29.160755ms) Feb 20 13:18:03.605: INFO: (6) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 29.062796ms) Feb 20 13:18:03.606: INFO: (6) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 29.541166ms) Feb 20 13:18:03.606: INFO: (6) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 29.825788ms) Feb 20 13:18:03.606: INFO: (6) 
/api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 29.815577ms) Feb 20 13:18:03.606: INFO: (6) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 30.093787ms) Feb 20 13:18:03.606: INFO: (6) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 30.004258ms) Feb 20 13:18:03.606: INFO: (6) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 29.817495ms) Feb 20 13:18:03.606: INFO: (6) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 32.567183ms) Feb 20 13:18:03.608: INFO: (6) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 32.264899ms) Feb 20 13:18:03.608: INFO: (6) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 32.485945ms) Feb 20 13:18:03.609: INFO: (6) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 32.832938ms) Feb 20 13:18:03.611: INFO: (6) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 35.368639ms) Feb 20 13:18:03.646: INFO: (7) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 34.18701ms) Feb 20 13:18:03.646: INFO: (7) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 34.408645ms) Feb 20 13:18:03.646: INFO: (7) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 34.179574ms) Feb 20 13:18:03.646: INFO: (7) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 34.45665ms) Feb 20 13:18:03.649: INFO: (7) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 37.209709ms) Feb 20 13:18:03.649: INFO: (7) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test<... 
(200; 37.328275ms) Feb 20 13:18:03.651: INFO: (7) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 39.015218ms) Feb 20 13:18:03.651: INFO: (7) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 39.537332ms) Feb 20 13:18:03.654: INFO: (7) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 42.790643ms) Feb 20 13:18:03.654: INFO: (7) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 42.780005ms) Feb 20 13:18:03.655: INFO: (7) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 43.59671ms) Feb 20 13:18:03.656: INFO: (7) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 44.134598ms) Feb 20 13:18:03.669: INFO: (8) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 12.449544ms) Feb 20 13:18:03.669: INFO: (8) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 12.929749ms) Feb 20 13:18:03.675: INFO: (8) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 18.915445ms) Feb 20 13:18:03.678: INFO: (8) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 21.943856ms) Feb 20 13:18:03.680: INFO: (8) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 24.301382ms) Feb 20 13:18:03.680: INFO: (8) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 24.471016ms) Feb 20 13:18:03.681: INFO: (8) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 24.678042ms) Feb 20 13:18:03.681: INFO: (8) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 24.833331ms) Feb 20 13:18:03.681: INFO: (8) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 25.221304ms) Feb 20 13:18:03.681: INFO: (8) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 25.077562ms) Feb 20 13:18:03.681: INFO: (8) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 25.348562ms) Feb 20 13:18:03.681: INFO: (8) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 25.232671ms) Feb 20 13:18:03.682: INFO: (8) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 26.041464ms) Feb 20 13:18:03.682: INFO: (8) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test<... (200; 16.949787ms) Feb 20 13:18:03.705: INFO: (9) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 19.628763ms) Feb 20 13:18:03.705: INFO: (9) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 20.470604ms) Feb 20 13:18:03.709: INFO: (9) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 24.406783ms) Feb 20 13:18:03.710: INFO: (9) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 24.668901ms) Feb 20 13:18:03.711: INFO: (9) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 26.42486ms) Feb 20 13:18:03.712: INFO: (9) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test<... 
(200; 11.973265ms) Feb 20 13:18:03.731: INFO: (10) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 11.914061ms) Feb 20 13:18:03.731: INFO: (10) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 12.423416ms) Feb 20 13:18:03.731: INFO: (10) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 12.569714ms) Feb 20 13:18:03.731: INFO: (10) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 8.410113ms) Feb 20 13:18:03.744: INFO: (11) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 8.478152ms) Feb 20 13:18:03.744: INFO: (11) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 8.647952ms) Feb 20 13:18:03.744: INFO: (11) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 8.837346ms) Feb 20 13:18:03.747: INFO: (11) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 11.536634ms) Feb 20 13:18:03.747: INFO: (11) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 8.082831ms) Feb 20 13:18:03.758: INFO: (12) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 8.267634ms) Feb 20 13:18:03.759: INFO: (12) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 8.378158ms) Feb 20 13:18:03.759: INFO: (12) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 8.767742ms) Feb 20 13:18:03.759: INFO: (12) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: ... (200; 13.927345ms) Feb 20 13:18:03.771: INFO: (12) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 21.386981ms) Feb 20 13:18:03.772: INFO: (12) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 21.471989ms) Feb 20 13:18:03.772: INFO: (12) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 22.041311ms) Feb 20 13:18:03.772: INFO: (12) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 22.17073ms) Feb 20 13:18:03.773: INFO: (12) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 23.202561ms) Feb 20 13:18:03.781: INFO: (12) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 30.560353ms) Feb 20 13:18:03.805: INFO: (13) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 23.42628ms) Feb 20 13:18:03.805: INFO: (13) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 23.664494ms) Feb 20 13:18:03.805: INFO: (13) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 23.100544ms) Feb 20 13:18:03.805: INFO: (13) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... 
(200; 23.524728ms) Feb 20 13:18:03.805: INFO: (13) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 23.19282ms) Feb 20 13:18:03.805: INFO: (13) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 23.323776ms) Feb 20 13:18:03.806: INFO: (13) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 24.353882ms) Feb 20 13:18:03.806: INFO: (13) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 24.839659ms) Feb 20 13:18:03.807: INFO: (13) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 25.482035ms) Feb 20 13:18:03.807: INFO: (13) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 24.994508ms) Feb 20 13:18:03.807: INFO: (13) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 25.320421ms) Feb 20 13:18:03.807: INFO: (13) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 25.833651ms) Feb 20 13:18:03.807: INFO: (13) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 25.203175ms) Feb 20 13:18:03.807: INFO: (13) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 32.264968ms) Feb 20 13:18:03.840: INFO: (14) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 32.284143ms) Feb 20 13:18:03.840: INFO: (14) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 32.505047ms) Feb 20 13:18:03.840: INFO: (14) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 32.3896ms) Feb 20 13:18:03.841: INFO: (14) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 32.870528ms) Feb 20 13:18:03.841: INFO: (14) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 32.930005ms) Feb 20 13:18:03.841: INFO: (14) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 32.934508ms) Feb 20 13:18:03.841: INFO: (14) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 33.040394ms) Feb 20 13:18:03.842: INFO: (14) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 33.820272ms) Feb 20 13:18:03.843: INFO: (14) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 34.88394ms) Feb 20 13:18:03.864: INFO: (15) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 20.711036ms) Feb 20 13:18:03.865: INFO: (15) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 23.100054ms) Feb 20 13:18:03.867: INFO: (15) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 23.370046ms) Feb 20 13:18:03.867: INFO: (15) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 23.155603ms) Feb 20 13:18:03.867: INFO: (15) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 23.531518ms) Feb 20 13:18:03.867: INFO: (15) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 23.514067ms) Feb 20 13:18:03.867: INFO: (15) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... 
(200; 23.442143ms) Feb 20 13:18:03.874: INFO: (15) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 30.621854ms) Feb 20 13:18:03.874: INFO: (15) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 31.273596ms) Feb 20 13:18:03.874: INFO: (15) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 30.976105ms) Feb 20 13:18:03.874: INFO: (15) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 31.305052ms) Feb 20 13:18:03.882: INFO: (16) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 7.27548ms) Feb 20 13:18:03.906: INFO: (16) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 29.767104ms) Feb 20 13:18:03.906: INFO: (16) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 30.992828ms) Feb 20 13:18:03.906: INFO: (16) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 29.898676ms) Feb 20 13:18:03.907: INFO: (16) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 31.154036ms) Feb 20 13:18:03.907: INFO: (16) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 31.263801ms) Feb 20 13:18:03.907: INFO: (16) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 31.783853ms) Feb 20 13:18:03.907: INFO: (16) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 31.52995ms) Feb 20 13:18:03.911: INFO: (16) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 35.994399ms) Feb 20 13:18:03.912: INFO: (16) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 36.394455ms) Feb 20 13:18:03.912: INFO: (16) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 35.798629ms) Feb 20 13:18:03.913: INFO: (16) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 37.279042ms) Feb 20 13:18:03.913: INFO: (16) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 37.886171ms) Feb 20 13:18:03.913: INFO: (16) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 37.333279ms) Feb 20 13:18:03.913: INFO: (16) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test (200; 6.351977ms) Feb 20 13:18:03.920: INFO: (17) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: ... (200; 6.635135ms) Feb 20 13:18:03.920: INFO: (17) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 6.859713ms) Feb 20 13:18:03.921: INFO: (17) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 7.269696ms) Feb 20 13:18:03.921: INFO: (17) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 7.652676ms) Feb 20 13:18:03.922: INFO: (17) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... 
(200; 8.272175ms) Feb 20 13:18:03.922: INFO: (17) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 8.348315ms) Feb 20 13:18:03.926: INFO: (17) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 12.68142ms) Feb 20 13:18:03.927: INFO: (17) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 13.585936ms) Feb 20 13:18:03.929: INFO: (17) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 15.795716ms) Feb 20 13:18:03.930: INFO: (17) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 15.832408ms) Feb 20 13:18:03.932: INFO: (17) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 18.306249ms) Feb 20 13:18:03.933: INFO: (17) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 18.854439ms) Feb 20 13:18:03.933: INFO: (17) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname2/proxy/: bar (200; 19.039364ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 45.885598ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:1080/proxy/: test<... (200; 45.780141ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 46.009675ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 46.029474ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 46.369131ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 46.404515ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 46.435788ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 46.452883ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 46.429104ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 46.556827ms) Feb 20 13:18:03.979: INFO: (18) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 46.484778ms) Feb 20 13:18:03.980: INFO: (18) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:443/proxy/: test<... 
(200; 20.298542ms) Feb 20 13:18:04.003: INFO: (19) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname2/proxy/: tls qux (200; 21.132661ms) Feb 20 13:18:04.003: INFO: (19) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname2/proxy/: bar (200; 21.905253ms) Feb 20 13:18:04.004: INFO: (19) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:460/proxy/: tls baz (200; 23.185939ms) Feb 20 13:18:04.005: INFO: (19) /api/v1/namespaces/proxy-2093/services/https:proxy-service-qppjh:tlsportname1/proxy/: tls baz (200; 23.295541ms) Feb 20 13:18:04.005: INFO: (19) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:160/proxy/: foo (200; 23.262462ms) Feb 20 13:18:04.005: INFO: (19) /api/v1/namespaces/proxy-2093/services/proxy-service-qppjh:portname1/proxy/: foo (200; 23.465873ms) Feb 20 13:18:04.007: INFO: (19) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:1080/proxy/: ... (200; 25.28441ms) Feb 20 13:18:04.008: INFO: (19) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:160/proxy/: foo (200; 26.29186ms) Feb 20 13:18:04.008: INFO: (19) /api/v1/namespaces/proxy-2093/pods/https:proxy-service-qppjh-22hpf:462/proxy/: tls qux (200; 26.222644ms) Feb 20 13:18:04.008: INFO: (19) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf:162/proxy/: bar (200; 26.818478ms) Feb 20 13:18:04.008: INFO: (19) /api/v1/namespaces/proxy-2093/pods/proxy-service-qppjh-22hpf/proxy/: test (200; 26.70841ms) Feb 20 13:18:04.008: INFO: (19) /api/v1/namespaces/proxy-2093/pods/http:proxy-service-qppjh-22hpf:162/proxy/: bar (200; 26.526591ms) Feb 20 13:18:04.008: INFO: (19) /api/v1/namespaces/proxy-2093/services/http:proxy-service-qppjh:portname1/proxy/: foo (200; 27.090307ms) STEP: deleting ReplicationController proxy-service-qppjh in namespace proxy-2093, will wait for the garbage collector to delete the pods Feb 20 13:18:04.080: INFO: Deleting ReplicationController proxy-service-qppjh took: 15.288387ms Feb 20 13:18:04.381: INFO: Terminating ReplicationController proxy-service-qppjh pods took: 300.433417ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:18:09.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2093" for this suite. 
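
All 320 attempts above target the apiserver's proxy subresource; the URL grammar is the only nontrivial part. A small self-contained sketch (plain Go, no client-go needed) that reproduces the paths seen in the log:

    package main

    import "fmt"

    // proxyPath composes an apiserver proxy URL for a pod or service. The
    // target segment is [scheme:]name[:port]; scheme selects plain or TLS
    // proxying, and port may be a number or a named service port.
    func proxyPath(ns, kind, scheme, name, port string) string {
        target := name
        if scheme != "" {
            target = scheme + ":" + target
        }
        if port != "" {
            target = target + ":" + port
        }
        return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
    }

    func main() {
        // Two of the endpoints attempted above:
        fmt.Println(proxyPath("proxy-2093", "pods", "https", "proxy-service-qppjh-22hpf", "443"))
        fmt.Println(proxyPath("proxy-2093", "services", "", "proxy-service-qppjh", "portname1"))
    }

The test issues each of the 16 case URLs 20 times through the authenticated apiserver connection and records the status and latency shown in parentheses.
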
Feb 20 13:18:15.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:18:15.322: INFO: namespace proxy-2093 deletion completed in 6.132367123s • [SLOW TEST:24.298 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:18:15.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 in namespace container-probe-2079 Feb 20 13:18:23.463: INFO: Started pod liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 in namespace container-probe-2079 STEP: checking the pod's current state and verifying that restartCount is present Feb 20 13:18:23.468: INFO: Initial restart count of pod liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 is 0 Feb 20 13:18:45.586: INFO: Restart count of pod container-probe-2079/liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 is now 1 (22.117433376s elapsed) Feb 20 13:19:05.727: INFO: Restart count of pod container-probe-2079/liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 is now 2 (42.258551708s elapsed) Feb 20 13:19:25.828: INFO: Restart count of pod container-probe-2079/liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 is now 3 (1m2.360037714s elapsed) Feb 20 13:19:45.947: INFO: Restart count of pod container-probe-2079/liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 is now 4 (1m22.478863375s elapsed) Feb 20 13:20:54.499: INFO: Restart count of pod container-probe-2079/liveness-00ea7421-3579-4c06-b27c-505bdeda8e71 is now 5 (2m31.030913717s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:20:54.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2079" for this suite. 
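
The restart cadence above (roughly every 20s, then a 68s gap before restart 5, consistent with the kubelet's crash-loop back-off) is what a failing exec liveness probe produces. A hedged sketch of a pod that behaves this way, using the v1.15-era API types; commands and timings are assumptions rather than the suite's literal spec.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // livenessPod deletes its own health file after 10s; the exec probe then
    // fails and the kubelet restarts the container, bumping restartCount.
    func livenessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-sketch"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways, // lets restartCount climb monotonically
                Containers: []corev1.Container{{
                    Name:    "liveness",
                    Image:   "busybox", // assumption
                    Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{ // embedded Handler in the v1.15 API (ProbeHandler in later releases)
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 15,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }

    func main() { _ = livenessPod() }
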
Feb 20 13:21:00.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:21:01.089: INFO: namespace container-probe-2079 deletion completed in 6.219181438s • [SLOW TEST:165.766 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:21:01.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 20 13:21:09.232: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-95863ba1-da62-4eb0-a3a4-24cf40ddb754,GenerateName:,Namespace:events-4175,SelfLink:/api/v1/namespaces/events-4175/pods/send-events-95863ba1-da62-4eb0-a3a4-24cf40ddb754,UID:76293d6b-027a-4bdc-b82c-3e658412e459,ResourceVersion:25073445,Generation:0,CreationTimestamp:2020-02-20 13:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 176232109,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r7wpg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r7wpg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-r7wpg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026c2290} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0026c2380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:21:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:21:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:21:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:21:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-20 13:21:01 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-20 13:21:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://90e97f722d054226511d13aab146ca274f7c3b71cca0d60821626b3d5f367d5b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 20 13:21:11.249: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 20 13:21:13.264: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:21:13.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4175" for this suite. Feb 20 13:21:51.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:21:51.433: INFO: namespace events-4175 deletion completed in 38.132391229s • [SLOW TEST:50.344 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:21:51.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-9ad3a9fa-5e37-47f2-b22a-281740d230fa STEP: Creating a pod to test consume secrets Feb 20 13:21:51.591: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510" in namespace "projected-3181" to be "success or failure" Feb 20 13:21:51.668: INFO: Pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510": Phase="Pending", Reason="", readiness=false. 
Elapsed: 76.038734ms Feb 20 13:21:53.673: INFO: Pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081735219s Feb 20 13:21:55.682: INFO: Pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090333983s Feb 20 13:21:57.696: INFO: Pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510": Phase="Running", Reason="", readiness=true. Elapsed: 6.104260819s Feb 20 13:21:59.708: INFO: Pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116969565s STEP: Saw pod success Feb 20 13:21:59.709: INFO: Pod "pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510" satisfied condition "success or failure" Feb 20 13:21:59.713: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510 container projected-secret-volume-test: STEP: delete the pod Feb 20 13:21:59.894: INFO: Waiting for pod pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510 to disappear Feb 20 13:21:59.911: INFO: Pod pod-projected-secrets-60394fe4-e810-4024-bfc6-ca6293116510 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:21:59.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3181" for this suite. Feb 20 13:22:05.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:22:06.073: INFO: namespace projected-3181 deletion completed in 6.15633401s • [SLOW TEST:14.640 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:22:06.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Feb 20 13:22:06.711: INFO: created pod pod-service-account-defaultsa Feb 20 13:22:06.711: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 20 13:22:07.242: INFO: created pod pod-service-account-mountsa Feb 20 13:22:07.242: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 20 13:22:07.282: INFO: created pod pod-service-account-nomountsa Feb 20 13:22:07.282: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 20 13:22:07.351: INFO: created pod pod-service-account-defaultsa-mountspec Feb 20 
13:22:07.351: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 20 13:22:07.373: INFO: created pod pod-service-account-mountsa-mountspec Feb 20 13:22:07.373: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 20 13:22:07.398: INFO: created pod pod-service-account-nomountsa-mountspec Feb 20 13:22:07.398: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 20 13:22:07.422: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 20 13:22:07.422: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 20 13:22:07.557: INFO: created pod pod-service-account-mountsa-nomountspec Feb 20 13:22:07.557: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 20 13:22:07.608: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 20 13:22:07.608: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:22:07.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8211" for this suite. Feb 20 13:22:36.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:22:36.884: INFO: namespace svcaccounts-8211 deletion completed in 29.230041763s • [SLOW TEST:30.810 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:22:36.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-05b86ca3-8e7e-4c1b-9204-592261adc37e STEP: Creating secret with name s-test-opt-upd-39b55bb2-0815-4bf4-8e29-ddcc8cb19bbf STEP: Creating the pod STEP: Deleting secret s-test-opt-del-05b86ca3-8e7e-4c1b-9204-592261adc37e STEP: Updating secret s-test-opt-upd-39b55bb2-0815-4bf4-8e29-ddcc8cb19bbf STEP: Creating secret with name s-test-opt-create-4f026760-4910-4f14-a5a4-546ab15b8148 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:24:05.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4791" for this suite. 
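
The true/false matrix printed by the automount test a few lines up follows one rule: spec.automountServiceAccountToken on the pod, when set, overrides the flag on the service account, and a fully unset pair defaults to mounting the token. A minimal sketch of that precedence; the service-account flag values are inferred from the mountsa/nomountsa names, not shown in the log itself.

    package main

    import "fmt"

    // tokenMounted mirrors the API's precedence: the pod's *bool wins when set,
    // then the service account's, then the default (mount the token).
    func tokenMounted(podFlag, saFlag *bool) bool {
        if podFlag != nil {
            return *podFlag
        }
        if saFlag != nil {
            return *saFlag
        }
        return true
    }

    func main() {
        t, f := true, false
        fmt.Println(tokenMounted(nil, nil)) // pod-service-account-defaultsa: true
        fmt.Println(tokenMounted(nil, &f))  // pod-service-account-nomountsa: false
        fmt.Println(tokenMounted(&t, &f))   // pod-service-account-nomountsa-mountspec: true
        fmt.Println(tokenMounted(&f, &t))   // pod-service-account-mountsa-nomountspec: false
    }
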
Feb 20 13:24:27.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:24:27.580: INFO: namespace secrets-4791 deletion completed in 22.16802902s • [SLOW TEST:110.696 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:24:27.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4458 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Feb 20 13:24:27.707: INFO: Found 0 stateful pods, waiting for 3 Feb 20 13:24:37.716: INFO: Found 2 stateful pods, waiting for 3 Feb 20 13:24:47.723: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 13:24:47.723: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 13:24:47.723: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 13:24:57.715: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 13:24:57.715: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 13:24:57.715: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 20 13:24:57.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4458 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 20 13:25:00.450: INFO: stderr: "I0220 13:25:00.131870 469 log.go:172] (0xc000704210) (0xc00071e640) Create stream\nI0220 13:25:00.131935 469 log.go:172] (0xc000704210) (0xc00071e640) Stream added, broadcasting: 1\nI0220 13:25:00.135294 469 log.go:172] (0xc000704210) Reply frame received for 1\nI0220 13:25:00.135340 469 log.go:172] (0xc000704210) (0xc00071e6e0) Create stream\nI0220 13:25:00.135349 469 log.go:172] (0xc000704210) (0xc00071e6e0) Stream added, broadcasting: 3\nI0220 13:25:00.137276 469 log.go:172] (0xc000704210) Reply frame received for 3\nI0220 13:25:00.137309 469 log.go:172] (0xc000704210) (0xc0006143c0) Create stream\nI0220 13:25:00.137322 469 log.go:172] (0xc000704210) (0xc0006143c0) Stream added, broadcasting: 5\nI0220 
13:25:00.138779 469 log.go:172] (0xc000704210) Reply frame received for 5\nI0220 13:25:00.297360 469 log.go:172] (0xc000704210) Data frame received for 5\nI0220 13:25:00.297389 469 log.go:172] (0xc0006143c0) (5) Data frame handling\nI0220 13:25:00.297407 469 log.go:172] (0xc0006143c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 13:25:00.357536 469 log.go:172] (0xc000704210) Data frame received for 3\nI0220 13:25:00.357604 469 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0220 13:25:00.357623 469 log.go:172] (0xc00071e6e0) (3) Data frame sent\nI0220 13:25:00.439525 469 log.go:172] (0xc000704210) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0220 13:25:00.439646 469 log.go:172] (0xc000704210) Data frame received for 1\nI0220 13:25:00.439685 469 log.go:172] (0xc000704210) (0xc0006143c0) Stream removed, broadcasting: 5\nI0220 13:25:00.439728 469 log.go:172] (0xc00071e640) (1) Data frame handling\nI0220 13:25:00.439754 469 log.go:172] (0xc00071e640) (1) Data frame sent\nI0220 13:25:00.439771 469 log.go:172] (0xc000704210) (0xc00071e640) Stream removed, broadcasting: 1\nI0220 13:25:00.439790 469 log.go:172] (0xc000704210) Go away received\nI0220 13:25:00.440734 469 log.go:172] (0xc000704210) (0xc00071e640) Stream removed, broadcasting: 1\nI0220 13:25:00.440770 469 log.go:172] (0xc000704210) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0220 13:25:00.440791 469 log.go:172] (0xc000704210) (0xc0006143c0) Stream removed, broadcasting: 5\n" Feb 20 13:25:00.450: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 20 13:25:00.450: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 20 13:25:10.510: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 20 13:25:20.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4458 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 13:25:20.867: INFO: stderr: "I0220 13:25:20.704636 499 log.go:172] (0xc0007b84d0) (0xc0008b26e0) Create stream\nI0220 13:25:20.704691 499 log.go:172] (0xc0007b84d0) (0xc0008b26e0) Stream added, broadcasting: 1\nI0220 13:25:20.707282 499 log.go:172] (0xc0007b84d0) Reply frame received for 1\nI0220 13:25:20.707341 499 log.go:172] (0xc0007b84d0) (0xc0004c40a0) Create stream\nI0220 13:25:20.707351 499 log.go:172] (0xc0007b84d0) (0xc0004c40a0) Stream added, broadcasting: 3\nI0220 13:25:20.709527 499 log.go:172] (0xc0007b84d0) Reply frame received for 3\nI0220 13:25:20.709564 499 log.go:172] (0xc0007b84d0) (0xc00010e000) Create stream\nI0220 13:25:20.709572 499 log.go:172] (0xc0007b84d0) (0xc00010e000) Stream added, broadcasting: 5\nI0220 13:25:20.710747 499 log.go:172] (0xc0007b84d0) Reply frame received for 5\nI0220 13:25:20.799721 499 log.go:172] (0xc0007b84d0) Data frame received for 5\nI0220 13:25:20.799755 499 log.go:172] (0xc00010e000) (5) Data frame handling\nI0220 13:25:20.799764 499 log.go:172] (0xc00010e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 13:25:20.799778 499 log.go:172] (0xc0007b84d0) Data frame received for 3\nI0220 13:25:20.799783 499 log.go:172] (0xc0004c40a0) (3) Data frame handling\nI0220 13:25:20.799790 499 log.go:172] (0xc0004c40a0) (3) Data frame sent\nI0220 
13:25:20.859346 499 log.go:172] (0xc0007b84d0) (0xc0004c40a0) Stream removed, broadcasting: 3\nI0220 13:25:20.859457 499 log.go:172] (0xc0007b84d0) Data frame received for 1\nI0220 13:25:20.859472 499 log.go:172] (0xc0008b26e0) (1) Data frame handling\nI0220 13:25:20.859495 499 log.go:172] (0xc0008b26e0) (1) Data frame sent\nI0220 13:25:20.859510 499 log.go:172] (0xc0007b84d0) (0xc0008b26e0) Stream removed, broadcasting: 1\nI0220 13:25:20.859638 499 log.go:172] (0xc0007b84d0) (0xc00010e000) Stream removed, broadcasting: 5\nI0220 13:25:20.859695 499 log.go:172] (0xc0007b84d0) Go away received\nI0220 13:25:20.860012 499 log.go:172] (0xc0007b84d0) (0xc0008b26e0) Stream removed, broadcasting: 1\nI0220 13:25:20.860050 499 log.go:172] (0xc0007b84d0) (0xc0004c40a0) Stream removed, broadcasting: 3\nI0220 13:25:20.860076 499 log.go:172] (0xc0007b84d0) (0xc00010e000) Stream removed, broadcasting: 5\n" Feb 20 13:25:20.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 20 13:25:20.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 20 13:25:30.908: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:25:30.908: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:25:30.908: INFO: Waiting for Pod statefulset-4458/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:25:30.908: INFO: Waiting for Pod statefulset-4458/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:25:40.966: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:25:40.966: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:25:40.966: INFO: Waiting for Pod statefulset-4458/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:25:50.932: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:25:50.932: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:25:50.932: INFO: Waiting for Pod statefulset-4458/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:26:00.921: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:26:00.921: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:26:10.925: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:26:10.925: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 13:26:20.926: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update STEP: Rolling back to a previous revision Feb 20 13:26:30.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4458 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 20 13:26:31.415: INFO: stderr: "I0220 13:26:31.120789 517 log.go:172] (0xc0007f8160) (0xc0005cc460) Create stream\nI0220 13:26:31.120979 517 log.go:172] (0xc0007f8160) (0xc0005cc460) Stream added, broadcasting: 1\nI0220 13:26:31.124047 517 log.go:172] (0xc0007f8160) Reply frame received for 1\nI0220 13:26:31.124076 517 log.go:172] (0xc0007f8160) (0xc000842000) Create stream\nI0220 13:26:31.124088 517 log.go:172] (0xc0007f8160) 
(0xc000842000) Stream added, broadcasting: 3\nI0220 13:26:31.125258 517 log.go:172] (0xc0007f8160) Reply frame received for 3\nI0220 13:26:31.125291 517 log.go:172] (0xc0007f8160) (0xc0001f4000) Create stream\nI0220 13:26:31.125298 517 log.go:172] (0xc0007f8160) (0xc0001f4000) Stream added, broadcasting: 5\nI0220 13:26:31.126225 517 log.go:172] (0xc0007f8160) Reply frame received for 5\nI0220 13:26:31.257529 517 log.go:172] (0xc0007f8160) Data frame received for 5\nI0220 13:26:31.257628 517 log.go:172] (0xc0001f4000) (5) Data frame handling\nI0220 13:26:31.257647 517 log.go:172] (0xc0001f4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 13:26:31.310128 517 log.go:172] (0xc0007f8160) Data frame received for 3\nI0220 13:26:31.310171 517 log.go:172] (0xc000842000) (3) Data frame handling\nI0220 13:26:31.310185 517 log.go:172] (0xc000842000) (3) Data frame sent\nI0220 13:26:31.408678 517 log.go:172] (0xc0007f8160) Data frame received for 1\nI0220 13:26:31.408770 517 log.go:172] (0xc0007f8160) (0xc000842000) Stream removed, broadcasting: 3\nI0220 13:26:31.408817 517 log.go:172] (0xc0005cc460) (1) Data frame handling\nI0220 13:26:31.408828 517 log.go:172] (0xc0005cc460) (1) Data frame sent\nI0220 13:26:31.408850 517 log.go:172] (0xc0007f8160) (0xc0001f4000) Stream removed, broadcasting: 5\nI0220 13:26:31.408884 517 log.go:172] (0xc0007f8160) (0xc0005cc460) Stream removed, broadcasting: 1\nI0220 13:26:31.408894 517 log.go:172] (0xc0007f8160) Go away received\nI0220 13:26:31.409409 517 log.go:172] (0xc0007f8160) (0xc0005cc460) Stream removed, broadcasting: 1\nI0220 13:26:31.409425 517 log.go:172] (0xc0007f8160) (0xc000842000) Stream removed, broadcasting: 3\nI0220 13:26:31.409428 517 log.go:172] (0xc0007f8160) (0xc0001f4000) Stream removed, broadcasting: 5\n" Feb 20 13:26:31.415: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 20 13:26:31.415: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 20 13:26:41.462: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 20 13:26:51.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4458 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 13:26:51.934: INFO: stderr: "I0220 13:26:51.717252 532 log.go:172] (0xc0008e42c0) (0xc00075a640) Create stream\nI0220 13:26:51.717365 532 log.go:172] (0xc0008e42c0) (0xc00075a640) Stream added, broadcasting: 1\nI0220 13:26:51.720804 532 log.go:172] (0xc0008e42c0) Reply frame received for 1\nI0220 13:26:51.720836 532 log.go:172] (0xc0008e42c0) (0xc00056a320) Create stream\nI0220 13:26:51.720853 532 log.go:172] (0xc0008e42c0) (0xc00056a320) Stream added, broadcasting: 3\nI0220 13:26:51.722173 532 log.go:172] (0xc0008e42c0) Reply frame received for 3\nI0220 13:26:51.722199 532 log.go:172] (0xc0008e42c0) (0xc00056a3c0) Create stream\nI0220 13:26:51.722206 532 log.go:172] (0xc0008e42c0) (0xc00056a3c0) Stream added, broadcasting: 5\nI0220 13:26:51.723822 532 log.go:172] (0xc0008e42c0) Reply frame received for 5\nI0220 13:26:51.831309 532 log.go:172] (0xc0008e42c0) Data frame received for 3\nI0220 13:26:51.831352 532 log.go:172] (0xc00056a320) (3) Data frame handling\nI0220 13:26:51.831375 532 log.go:172] (0xc00056a320) (3) Data frame sent\nI0220 13:26:51.842093 532 log.go:172] (0xc0008e42c0) Data frame received for 5\nI0220 13:26:51.842176 532 log.go:172] 
(0xc00056a3c0) (5) Data frame handling\nI0220 13:26:51.842215 532 log.go:172] (0xc00056a3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 13:26:51.929013 532 log.go:172] (0xc0008e42c0) Data frame received for 1\nI0220 13:26:51.929037 532 log.go:172] (0xc00075a640) (1) Data frame handling\nI0220 13:26:51.929048 532 log.go:172] (0xc00075a640) (1) Data frame sent\nI0220 13:26:51.929472 532 log.go:172] (0xc0008e42c0) (0xc00075a640) Stream removed, broadcasting: 1\nI0220 13:26:51.929975 532 log.go:172] (0xc0008e42c0) (0xc00056a320) Stream removed, broadcasting: 3\nI0220 13:26:51.930006 532 log.go:172] (0xc0008e42c0) (0xc00056a3c0) Stream removed, broadcasting: 5\nI0220 13:26:51.930026 532 log.go:172] (0xc0008e42c0) (0xc00075a640) Stream removed, broadcasting: 1\nI0220 13:26:51.930035 532 log.go:172] (0xc0008e42c0) (0xc00056a320) Stream removed, broadcasting: 3\nI0220 13:26:51.930041 532 log.go:172] (0xc0008e42c0) (0xc00056a3c0) Stream removed, broadcasting: 5\n" Feb 20 13:26:51.934: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 20 13:26:51.934: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 20 13:27:02.007: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:27:02.007: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:02.007: INFO: Waiting for Pod statefulset-4458/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:02.007: INFO: Waiting for Pod statefulset-4458/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:12.027: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:27:12.027: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:12.027: INFO: Waiting for Pod statefulset-4458/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:22.028: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:27:22.028: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:32.034: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update Feb 20 13:27:32.034: INFO: Waiting for Pod statefulset-4458/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 20 13:27:42.038: INFO: Waiting for StatefulSet statefulset-4458/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 20 13:27:52.021: INFO: Deleting all statefulset in ns statefulset-4458 Feb 20 13:27:52.025: INFO: Scaling statefulset ss2 to 0 Feb 20 13:28:32.062: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 13:28:32.072: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:28:32.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4458" for this suite. 
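For reference, the rolling update and rollback exercised above can be reproduced by hand. This is a minimal sketch, not part of the suite: it assumes a v1.15-era kubectl (which supports rollout status/undo for StatefulSets) and reuses the namespace, StatefulSet name, and images from the log.

# Change the pod template image, as the test does when creating the new revision:
kubectl -n statefulset-4458 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# The RollingUpdate strategy replaces pods in reverse ordinal order (ss2-2, ss2-1, ss2-0):
kubectl -n statefulset-4458 rollout status statefulset/ss2
# Each template is recorded as a ControllerRevision (e.g. ss2-6c5cd755cd, ss2-7c9b54fd4c above):
kubectl -n statefulset-4458 get controllerrevisions
# Roll back to the previous revision, mirroring the "Rolling back to a previous revision" step:
kubectl -n statefulset-4458 rollout undo statefulset/ss2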
Feb 20 13:28:40.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:28:40.272: INFO: namespace statefulset-4458 deletion completed in 8.173760148s • [SLOW TEST:252.693 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:28:40.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:28:40.377: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:28:41.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8761" for this suite. 
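The CRD test body leaves little trace in the log beyond the kubeConfig line, so for context: "creating/deleting custom resource definition objects" amounts to registering a definition with the apiextensions API and removing it again. A sketch with made-up names (the suite generates random groups and kinds); apiextensions.k8s.io/v1beta1 is the version served by this v1.15 API server:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: examples.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
EOF
# Deleting the definition also removes any custom objects stored under it:
kubectl delete customresourcedefinition examples.mygroup.example.com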
Feb 20 13:28:47.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:28:47.713: INFO: namespace custom-resource-definition-8761 deletion completed in 6.18952073s • [SLOW TEST:7.441 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:28:47.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-9dd8e337-06b1-4cee-9227-10ea71fe9cf7 STEP: Creating a pod to test consume configMaps Feb 20 13:28:47.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252" in namespace "projected-8256" to be "success or failure" Feb 20 13:28:47.857: INFO: Pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203445ms Feb 20 13:28:49.870: INFO: Pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018959316s Feb 20 13:28:51.882: INFO: Pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031197566s Feb 20 13:28:53.892: INFO: Pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041186487s Feb 20 13:28:55.904: INFO: Pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.05240272s STEP: Saw pod success Feb 20 13:28:55.904: INFO: Pod "pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252" satisfied condition "success or failure" Feb 20 13:28:55.911: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252 container projected-configmap-volume-test: STEP: delete the pod Feb 20 13:28:56.007: INFO: Waiting for pod pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252 to disappear Feb 20 13:28:56.018: INFO: Pod pod-projected-configmaps-1c32f5c1-cff3-4226-9b78-dca88bbd3252 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:28:56.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8256" for this suite. Feb 20 13:29:02.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:29:02.837: INFO: namespace projected-8256 deletion completed in 6.810506771s • [SLOW TEST:15.124 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:29:02.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 20 13:29:02.955: INFO: Waiting up to 5m0s for pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc" in namespace "downward-api-1810" to be "success or failure" Feb 20 13:29:02.958: INFO: Pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.749775ms Feb 20 13:29:04.965: INFO: Pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009900309s Feb 20 13:29:06.973: INFO: Pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017987298s Feb 20 13:29:08.981: INFO: Pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025942118s Feb 20 13:29:10.988: INFO: Pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.033189056s STEP: Saw pod success Feb 20 13:29:10.988: INFO: Pod "downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc" satisfied condition "success or failure" Feb 20 13:29:10.991: INFO: Trying to get logs from node iruya-node pod downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc container dapi-container: STEP: delete the pod Feb 20 13:29:11.225: INFO: Waiting for pod downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc to disappear Feb 20 13:29:11.231: INFO: Pod downward-api-6095f4c1-75b8-4fb9-ba44-0f695566c3cc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:29:11.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1810" for this suite. Feb 20 13:29:17.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:29:17.518: INFO: namespace downward-api-1810 deletion completed in 6.277502268s • [SLOW TEST:14.680 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:29:17.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0220 13:29:20.863729 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
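The retries above ("expected 0 pods, got 2 pods") reflect that garbage collection is asynchronous: the test deletes only the Deployment and then polls until the collector has followed the ownerReference chain (Deployment -> ReplicaSet -> Pods) and removed the dependents. A hand-run equivalent, with an illustrative deployment name (the log does not record the one the test used); --cascade=true is the default for kubectl delete in this release:

kubectl -n gc-6252 create deployment example --image=nginx
kubectl -n gc-6252 delete deployment example --cascade=true
# Poll until the dependents are collected; they disappear shortly after the owner:
kubectl -n gc-6252 get replicasets,pods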
Feb 20 13:29:20.863: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:29:20.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6252" for this suite. Feb 20 13:29:26.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:29:27.013: INFO: namespace gc-6252 deletion completed in 6.143661828s • [SLOW TEST:9.496 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:29:27.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-f0e67e41-42d8-4070-a22a-eceb79a8fa02 STEP: Creating a pod to test consume configMaps Feb 20 13:29:27.113: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f" in namespace "projected-3434" to be "success or failure" Feb 20 13:29:27.118: INFO: Pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.268169ms Feb 20 13:29:29.125: INFO: Pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011853608s Feb 20 13:29:31.133: INFO: Pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.020392238s Feb 20 13:29:33.141: INFO: Pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028433455s Feb 20 13:29:35.148: INFO: Pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034725258s STEP: Saw pod success Feb 20 13:29:35.148: INFO: Pod "pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f" satisfied condition "success or failure" Feb 20 13:29:35.152: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f container projected-configmap-volume-test: STEP: delete the pod Feb 20 13:29:35.226: INFO: Waiting for pod pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f to disappear Feb 20 13:29:35.250: INFO: Pod pod-projected-configmaps-ff5f4de0-8825-4fb3-be31-bf493ef4528f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:29:35.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3434" for this suite. Feb 20 13:29:41.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:29:41.360: INFO: namespace projected-3434 deletion completed in 6.096567828s • [SLOW TEST:14.346 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:29:41.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-65a924bb-e531-4f1c-8676-5e0ef00e1e88 STEP: Creating a pod to test consume secrets Feb 20 13:29:41.436: INFO: Waiting up to 5m0s for pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f" in namespace "secrets-8579" to be "success or failure" Feb 20 13:29:41.443: INFO: Pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.667232ms Feb 20 13:29:43.451: INFO: Pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015102616s Feb 20 13:29:45.466: INFO: Pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03002023s Feb 20 13:29:47.474: INFO: Pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.038702s Feb 20 13:29:49.485: INFO: Pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049547369s STEP: Saw pod success Feb 20 13:29:49.485: INFO: Pod "pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f" satisfied condition "success or failure" Feb 20 13:29:49.491: INFO: Trying to get logs from node iruya-node pod pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f container secret-volume-test: STEP: delete the pod Feb 20 13:29:49.566: INFO: Waiting for pod pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f to disappear Feb 20 13:29:49.598: INFO: Pod pod-secrets-e66bff67-7688-487d-9afc-359ce03a004f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:29:49.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8579" for this suite. Feb 20 13:29:55.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:29:55.852: INFO: namespace secrets-8579 deletion completed in 6.24506372s • [SLOW TEST:14.492 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:29:55.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:29:55.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0" in namespace "downward-api-3517" to be "success or failure" Feb 20 13:29:56.000: INFO: Pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029115ms Feb 20 13:29:58.013: INFO: Pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024572883s Feb 20 13:30:00.023: INFO: Pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034972874s Feb 20 13:30:02.034: INFO: Pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046099086s Feb 20 13:30:04.052: INFO: Pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.064429292s STEP: Saw pod success Feb 20 13:30:04.053: INFO: Pod "downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0" satisfied condition "success or failure" Feb 20 13:30:04.061: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0 container client-container: STEP: delete the pod Feb 20 13:30:04.239: INFO: Waiting for pod downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0 to disappear Feb 20 13:30:04.245: INFO: Pod downwardapi-volume-d010aff4-c990-4d7e-8314-b26e29b216d0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:30:04.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3517" for this suite. Feb 20 13:30:10.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:30:10.562: INFO: namespace downward-api-3517 deletion completed in 6.299032604s • [SLOW TEST:14.708 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:30:10.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 20 13:30:11.393: INFO: Pod name wrapped-volume-race-7af14bd8-f1ff-4d6a-938e-b7c09b82a0a1: Found 0 pods out of 5 Feb 20 13:30:16.404: INFO: Pod name wrapped-volume-race-7af14bd8-f1ff-4d6a-938e-b7c09b82a0a1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7af14bd8-f1ff-4d6a-938e-b7c09b82a0a1 in namespace emptydir-wrapper-7317, will wait for the garbage collector to delete the pods Feb 20 13:30:42.510: INFO: Deleting ReplicationController wrapped-volume-race-7af14bd8-f1ff-4d6a-938e-b7c09b82a0a1 took: 16.168869ms Feb 20 13:30:42.810: INFO: Terminating ReplicationController wrapped-volume-race-7af14bd8-f1ff-4d6a-938e-b7c09b82a0a1 pods took: 300.423205ms STEP: Creating RC which spawns configmap-volume pods Feb 20 13:31:36.679: INFO: Pod name wrapped-volume-race-50ff6334-1ca2-48cc-bf36-5afd690e4210: Found 0 pods out of 5 Feb 20 13:31:41.693: INFO: Pod name wrapped-volume-race-50ff6334-1ca2-48cc-bf36-5afd690e4210: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-50ff6334-1ca2-48cc-bf36-5afd690e4210 in namespace emptydir-wrapper-7317, will wait for the garbage collector to 
delete the pods Feb 20 13:32:07.816: INFO: Deleting ReplicationController wrapped-volume-race-50ff6334-1ca2-48cc-bf36-5afd690e4210 took: 15.691475ms Feb 20 13:32:08.317: INFO: Terminating ReplicationController wrapped-volume-race-50ff6334-1ca2-48cc-bf36-5afd690e4210 pods took: 500.401643ms STEP: Creating RC which spawns configmap-volume pods Feb 20 13:32:50.074: INFO: Pod name wrapped-volume-race-c06dc82d-21df-458c-a72c-605ee1d5e61f: Found 0 pods out of 5 Feb 20 13:32:55.109: INFO: Pod name wrapped-volume-race-c06dc82d-21df-458c-a72c-605ee1d5e61f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c06dc82d-21df-458c-a72c-605ee1d5e61f in namespace emptydir-wrapper-7317, will wait for the garbage collector to delete the pods Feb 20 13:33:25.228: INFO: Deleting ReplicationController wrapped-volume-race-c06dc82d-21df-458c-a72c-605ee1d5e61f took: 15.18277ms Feb 20 13:33:25.628: INFO: Terminating ReplicationController wrapped-volume-race-c06dc82d-21df-458c-a72c-605ee1d5e61f pods took: 400.405361ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:34:17.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7317" for this suite. Feb 20 13:34:27.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:34:28.116: INFO: namespace emptydir-wrapper-7317 deletion completed in 10.189644162s • [SLOW TEST:257.553 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:34:28.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0220 13:35:08.310612 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
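"If delete options say so" means the delete request carries propagationPolicy: Orphan, which a v1.15-era kubectl spells --cascade=false. A sketch with an illustrative RC name:

# Delete the controller but orphan its pods; the orphan finalizer strips their ownerReferences:
kubectl -n gc-9138 delete rc example --cascade=false
# The RC is gone, yet its pods keep running, as the 30-second check above verifies:
kubectl -n gc-9138 get pods
# The same request against the API directly:
#   DELETE /api/v1/namespaces/gc-9138/replicationcontrollers/example
#   {"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}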
Feb 20 13:35:08.310: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:35:08.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9138" for this suite. Feb 20 13:35:28.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:35:28.504: INFO: namespace gc-9138 deletion completed in 20.186299287s • [SLOW TEST:60.388 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:35:28.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:35:28.591: INFO: Creating deployment "nginx-deployment" Feb 20 13:35:28.609: INFO: Waiting for observed generation 1 Feb 20 13:35:31.717: INFO: Waiting for all required pods to come up Feb 20 13:35:32.380: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 20 13:35:58.630: INFO: Waiting for deployment "nginx-deployment" to complete Feb 20 13:35:58.642: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 20 13:35:58.653: INFO: Updating deployment nginx-deployment Feb 20 13:35:58.654: INFO: Waiting for observed generation 2 Feb 20 13:36:01.189: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 20 13:36:01.972: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 20 13:36:02.276: INFO: Waiting for the first rollout's replicaset of deployment 
"nginx-deployment" to have desired number of replicas Feb 20 13:36:02.300: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 20 13:36:02.300: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 20 13:36:02.304: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 20 13:36:02.310: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 20 13:36:02.310: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 20 13:36:02.323: INFO: Updating deployment nginx-deployment Feb 20 13:36:02.323: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 20 13:36:04.061: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 20 13:36:09.651: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 20 13:36:12.615: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5901,SelfLink:/apis/apps/v1/namespaces/deployment-5901/deployments/nginx-deployment,UID:795aaadb-1803-4662-b306-7988a8b06ae3,ResourceVersion:25076606,Generation:3,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-20 13:36:04 +0000 UTC 2020-02-20 13:36:04 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-20 13:36:09 +0000 UTC 2020-02-20 13:35:28 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 20 13:36:14.270: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5901,SelfLink:/apis/apps/v1/namespaces/deployment-5901/replicasets/nginx-deployment-55fb7cb77f,UID:a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d,ResourceVersion:25076604,Generation:3,CreationTimestamp:2020-02-20 13:35:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 795aaadb-1803-4662-b306-7988a8b06ae3 0xc002b41397 0xc002b41398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 13:36:14.270: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 20 13:36:14.270: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5901,SelfLink:/apis/apps/v1/namespaces/deployment-5901/replicasets/nginx-deployment-7b8c6f4498,UID:06a3e467-8329-4841-86a7-a8419f9b8ba6,ResourceVersion:25076595,Generation:3,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 795aaadb-1803-4662-b306-7988a8b06ae3 0xc002b41467 0xc002b41468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 20 13:36:16.753: INFO: Pod "nginx-deployment-55fb7cb77f-59hh8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-59hh8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-59hh8,UID:ff00a597-8a51-4f2b-9ee0-69dd44004e2e,ResourceVersion:25076534,Generation:0,CreationTimestamp:2020-02-20 13:35:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6197 0xc0026e6198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6210} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-02-20 13:36:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-20 13:36:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.753: INFO: Pod "nginx-deployment-55fb7cb77f-5bmnw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5bmnw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-5bmnw,UID:5bcbd88c-70d4-4329-b5f2-17bc6db2ac48,ResourceVersion:25076573,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6347 0xc0026e6348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e63b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e63d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.753: INFO: Pod "nginx-deployment-55fb7cb77f-6v7rm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6v7rm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-6v7rm,UID:8da14203-9b9a-4006-b508-de93c2618b38,ResourceVersion:25076505,Generation:0,CreationTimestamp:2020-02-20 13:35:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6457 0xc0026e6458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e64c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e64e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-20 13:35:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.754: INFO: Pod "nginx-deployment-55fb7cb77f-774wh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-774wh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-774wh,UID:a286a3be-e125-41c3-a009-6ed8197af0fd,ResourceVersion:25076533,Generation:0,CreationTimestamp:2020-02-20 13:35:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e65b7 0xc0026e65b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-20 13:35:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.754: INFO: Pod "nginx-deployment-55fb7cb77f-78jwc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-78jwc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-78jwc,UID:4405e0ed-6096-4dcd-ae67-db531d05cec8,ResourceVersion:25076504,Generation:0,CreationTimestamp:2020-02-20 13:35:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6717 0xc0026e6718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e67b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-20 13:35:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.754: INFO: Pod "nginx-deployment-55fb7cb77f-8jdfr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8jdfr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-8jdfr,UID:d4c8691f-300d-46b1-89ed-dfa4b5c84276,ResourceVersion:25076588,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6887 0xc0026e6888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6900} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.754: INFO: Pod "nginx-deployment-55fb7cb77f-9f65c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9f65c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-9f65c,UID:86d8227c-d4d1-4a9b-8ebf-a83b7a4a7658,ResourceVersion:25076564,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e69a7 0xc0026e69a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.755: INFO: Pod "nginx-deployment-55fb7cb77f-c6v72" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c6v72,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-c6v72,UID:b003662f-931c-4088-9615-24bc8b3ac335,ResourceVersion:25076584,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6ac7 0xc0026e6ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.755: INFO: Pod "nginx-deployment-55fb7cb77f-hk687" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hk687,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-hk687,UID:cfdc94a7-445b-4486-810a-3f114dcd2016,ResourceVersion:25076598,Generation:0,CreationTimestamp:2020-02-20 13:36:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6bd7 0xc0026e6bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.755: INFO: Pod "nginx-deployment-55fb7cb77f-lcsc5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lcsc5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-lcsc5,UID:b4cb63fc-9314-419f-95ff-84e0cfff37f7,ResourceVersion:25076603,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6cf7 0xc0026e6cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-20 13:36:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.755: INFO: Pod "nginx-deployment-55fb7cb77f-ls2k5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ls2k5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-ls2k5,UID:18d0fcc4-1944-4e8e-acf6-3b4dbc3623c1,ResourceVersion:25076581,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6e57 0xc0026e6e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e6ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.755: INFO: Pod "nginx-deployment-55fb7cb77f-w2f7c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w2f7c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-w2f7c,UID:08af323e-3a67-46c4-a41e-89be7d14e819,ResourceVersion:25076516,Generation:0,CreationTimestamp:2020-02-20 13:35:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e6f67 0xc0026e6f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e6fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-20 13:35:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.756: INFO: Pod "nginx-deployment-55fb7cb77f-ww2d8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ww2d8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-55fb7cb77f-ww2d8,UID:6a7ed44a-93b0-417e-85dc-da1f4c379713,ResourceVersion:25076587,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a5b251f8-ee3b-4781-9bcf-40ac6f21eb9d 0xc0026e70d7 0xc0026e70d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7150} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.756: INFO: Pod "nginx-deployment-7b8c6f4498-2hqjs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2hqjs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-2hqjs,UID:6902902f-afa9-4b4b-a097-6e95d7e7cc9d,ResourceVersion:25076442,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7207 0xc0026e7208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e72a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f4b91bf5144dafeed54f0c3f26768ecbe23324cdf7d6580c99cb062a8cbcbaf0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.756: INFO: Pod "nginx-deployment-7b8c6f4498-4k9b4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4k9b4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-4k9b4,UID:9ad558e3-2c28-49f1-bcb3-b66875989b8e,ResourceVersion:25076583,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7377 0xc0026e7378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e73f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.756: INFO: Pod "nginx-deployment-7b8c6f4498-7955p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7955p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-7955p,UID:7864facb-447d-4b6d-bf45-2ab408e95f12,ResourceVersion:25076623,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e74b7 0xc0026e74b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-20 13:36:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.757: INFO: Pod "nginx-deployment-7b8c6f4498-7p2sj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7p2sj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-7p2sj,UID:25e9a64a-40e1-4fb1-b85b-5d2efc11f456,ResourceVersion:25076586,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7617 0xc0026e7618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e76b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.757: INFO: Pod "nginx-deployment-7b8c6f4498-7vlpr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7vlpr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-7vlpr,UID:0d0f502c-a78a-4da0-b728-16a9e7c312cc,ResourceVersion:25076462,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7737 0xc0026e7738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e77a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e77c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://172cf4066cf650e3aa2cb1135cfb3c41922d92481bb19b43a362aa8cbb24f5eb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.757: INFO: Pod "nginx-deployment-7b8c6f4498-8g54n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8g54n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-8g54n,UID:c806bee5-8e72-4b93-b52b-23c86ab49315,ResourceVersion:25076468,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7897 0xc0026e7898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7900} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://493c17915cdfc931f5bb5801fb8703e1d90599876edb67c20a0505ee1ddb0c7e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.757: INFO: Pod "nginx-deployment-7b8c6f4498-hjr5t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hjr5t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-hjr5t,UID:43eaa83c-0874-40e3-bb73-a8d56550281c,ResourceVersion:25076582,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e79f7 0xc0026e79f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.757: INFO: Pod "nginx-deployment-7b8c6f4498-kv5zw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kv5zw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-kv5zw,UID:b3361011-fdf7-4f8f-9455-83dc00164853,ResourceVersion:25076580,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7b17 0xc0026e7b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.758: INFO: Pod "nginx-deployment-7b8c6f4498-llj6r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-llj6r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-llj6r,UID:31db2da8-47c6-419a-89ff-15565ca08a8a,ResourceVersion:25076585,Generation:0,CreationTimestamp:2020-02-20 13:36:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7c37 0xc0026e7c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.758: INFO: Pod "nginx-deployment-7b8c6f4498-n5bvj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n5bvj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-n5bvj,UID:c3f78e21-a69e-4f75-bb7e-a5402f4d1700,ResourceVersion:25076570,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7d47 0xc0026e7d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.758: INFO: Pod "nginx-deployment-7b8c6f4498-pz4t6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pz4t6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-pz4t6,UID:cfd1a65e-9480-4dd2-a435-611b6db68bca,ResourceVersion:25076445,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7e57 0xc0026e7e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e7ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e7ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-20 13:35:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://feab4b91a9810fa8be6c0bc7edc0ccaca4139e3ce53a0ff6c4339436d9aec6a8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 20 13:36:16.758: INFO: Pod "nginx-deployment-7b8c6f4498-qlv95" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qlv95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-qlv95,UID:21ece9ac-4d44-4fbf-a522-88b89b7a4c9e,ResourceVersion:25076559,Generation:0,CreationTimestamp:2020-02-20 13:36:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0026e7fc7 0xc0026e7fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f4030} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC } {Ready False 0001-01-01
00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-20 13:36:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.759: INFO: Pod "nginx-deployment-7b8c6f4498-rh9ph" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rh9ph,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-rh9ph,UID:cc39d625-4cf6-4e5c-a441-0169a8263634,ResourceVersion:25076471,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f41f7 0xc0019f41f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f4300} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 
13:35:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d973c0d4ee4eef2a7418893a87ca5f8497113b34d099ad3ed28cf3ebd936af1a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.759: INFO: Pod "nginx-deployment-7b8c6f4498-s2lzc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s2lzc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-s2lzc,UID:9cc97b36-24d7-4ec0-aa9b-03471764b95c,ResourceVersion:25076562,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f4507 0xc0019f4508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f45f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.759: INFO: Pod "nginx-deployment-7b8c6f4498-s5dlg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s5dlg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-s5dlg,UID:54e1b0cc-3c23-479d-ab69-06adb90d2afd,ResourceVersion:25076454,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f4717 0xc0019f4718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f47f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://065b2bd85d990beae4a4171c60efd1f121884604d1bd122998320b0a22950aed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.759: INFO: Pod "nginx-deployment-7b8c6f4498-tf6q7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tf6q7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-tf6q7,UID:3dcbbb76-5179-48bd-ba08-7b0dfceae840,ResourceVersion:25076451,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f4977 0xc0019f4978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f4aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://91c277d612d96237452ed231c377ee0bec9420f5d1d2c69c1e1d8919b161dad4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.759: INFO: Pod "nginx-deployment-7b8c6f4498-tkmks" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tkmks,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-tkmks,UID:e31a1813-917d-4c3a-8c04-5090ab7a2635,ResourceVersion:25076596,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f4c57 0xc0019f4c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f4cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-20 13:36:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.760: INFO: Pod "nginx-deployment-7b8c6f4498-wx82k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wx82k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-wx82k,UID:e8475def-98b3-4594-b334-798abd91d5e8,ResourceVersion:25076622,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f4db7 0xc0019f4db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f4e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-20 13:36:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.760: INFO: Pod "nginx-deployment-7b8c6f4498-xbsx5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xbsx5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-xbsx5,UID:10bbdd56-01d4-4f22-9120-5da7300e74d0,ResourceVersion:25076608,Generation:0,CreationTimestamp:2020-02-20 13:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f4f07 0xc0019f4f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f4f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f4f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:36:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-20 13:36:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 13:36:16.760: INFO: Pod "nginx-deployment-7b8c6f4498-zvd5f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zvd5f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5901,SelfLink:/api/v1/namespaces/deployment-5901/pods/nginx-deployment-7b8c6f4498-zvd5f,UID:164f24b8-6022-4014-a32a-8f521df49591,ResourceVersion:25076447,Generation:0,CreationTimestamp:2020-02-20 13:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 06a3e467-8329-4841-86a7-a8419f9b8ba6 0xc0019f5057 0xc0019f5058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z767h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z767h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z767h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019f50d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019f50f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:35:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-20 13:35:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 13:35:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://72ce1ef1f2376d5a4cbbd6ea05f04764c9e823c19304b31ddf57827df6ce382c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:36:16.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5901" for this suite. 
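The proportional-scaling spec whose pod dumps end above resizes a Deployment while a RollingUpdate rollout is still in flight, so the controller has to split the new replica count between the old and new ReplicaSets in proportion to their current sizes. For readers who want to reproduce the setup outside the e2e framework, a minimal client-go sketch follows; it assumes a modern context-aware client-go rather than the v1.15 framework vendored by this run, and the replica, surge, unavailability, and image values are illustrative, not the spec's exact ones.

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Same kubeconfig path the suite logs (>>> kubeConfig: /root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	maxSurge := intstr.FromInt(3)       // extra pods allowed above spec.replicas during a rollout
	maxUnavailable := intstr.FromInt(2) // pods that may be unavailable during a rollout

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	created, err := cs.AppsV1().Deployments("default").Create(ctx, dep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)

	// Change the image to start a rollout and resize in the same update; the
	// controller then divides the added replicas between the old and new
	// ReplicaSets proportionally. The next tag here is only a placeholder.
	created.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
	created.Spec.Replicas = int32Ptr(20)
	if _, err := cs.AppsV1().Deployments("default").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}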
Feb 20 13:37:05.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:37:06.043: INFO: namespace deployment-5901 deletion completed in 48.091725174s • [SLOW TEST:97.539 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:37:06.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-388573c3-65b9-4618-8189-ca76aebfee3a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:37:20.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1464" for this suite. 
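The binary-data ConfigMap spec above asserts that keys placed in a ConfigMap's binaryData field survive a volume mount byte-for-byte, alongside ordinary UTF-8 data keys. A minimal sketch of the same arrangement, assuming a context-aware client-go; the object names are placeholders rather than the suite's generated ones.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A ConfigMap carries UTF-8 entries in .data and arbitrary bytes in .binaryData.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xff, 0xfe, 0xfd}}, // deliberately not valid UTF-8
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mounting the ConfigMap as a volume materialises both kinds of keys as
	// files; the spec above checks the binary file round-trips unchanged.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-binary-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-binary-demo"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "hexdump -C /etc/cm/dump.bin && cat /etc/cm/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}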
Feb 20 13:37:42.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:37:42.569: INFO: namespace configmap-1464 deletion completed in 22.143243873s • [SLOW TEST:36.525 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:37:42.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:37:42.696: INFO: Creating deployment "test-recreate-deployment" Feb 20 13:37:42.704: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 20 13:37:42.782: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 20 13:37:44.793: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 20 13:37:44.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:37:46.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:37:48.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717802662, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 13:37:50.805: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 20 13:37:50.825: INFO: Updating deployment test-recreate-deployment Feb 20 13:37:50.825: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 20 13:37:51.151: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6062,SelfLink:/apis/apps/v1/namespaces/deployment-6062/deployments/test-recreate-deployment,UID:3ef2c3e5-8233-4c42-9a98-9b0b6872174c,ResourceVersion:25077011,Generation:2,CreationTimestamp:2020-02-20 13:37:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-20 13:37:51 +0000 UTC 2020-02-20 13:37:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-20 13:37:51 +0000 UTC 2020-02-20 13:37:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 20 13:37:51.155: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6062,SelfLink:/apis/apps/v1/namespaces/deployment-6062/replicasets/test-recreate-deployment-5c8c9cc69d,UID:4d21ed80-f0a9-4864-9453-b8a4a5d4d008,ResourceVersion:25077008,Generation:1,CreationTimestamp:2020-02-20 13:37:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3ef2c3e5-8233-4c42-9a98-9b0b6872174c 0xc00285cc07 0xc00285cc08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 13:37:51.155: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 20 13:37:51.156: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6062,SelfLink:/apis/apps/v1/namespaces/deployment-6062/replicasets/test-recreate-deployment-6df85df6b9,UID:946d5b18-93dc-4034-96e3-2f63d9c3bcc3,ResourceVersion:25076999,Generation:2,CreationTimestamp:2020-02-20 13:37:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3ef2c3e5-8233-4c42-9a98-9b0b6872174c 0xc00285ce27 0xc00285ce28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 13:37:51.159: INFO: Pod "test-recreate-deployment-5c8c9cc69d-ktmlr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-ktmlr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6062,SelfLink:/api/v1/namespaces/deployment-6062/pods/test-recreate-deployment-5c8c9cc69d-ktmlr,UID:d9626c31-e5d6-4a24-9c83-933a41c85922,ResourceVersion:25077012,Generation:0,CreationTimestamp:2020-02-20 13:37:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 4d21ed80-f0a9-4864-9453-b8a4a5d4d008 0xc00285d727 0xc00285d728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2dwbg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2dwbg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2dwbg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00285d7a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00285d7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:37:51 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:37:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:37:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-20 13:37:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:37:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6062" for this suite. Feb 20 13:37:59.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:37:59.310: INFO: namespace deployment-6062 deletion completed in 8.146730974s • [SLOW TEST:16.740 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:37:59.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:38:05.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-908" for this suite. 
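The Watchers spec above starts several watches anchored at the same resourceVersion while a background goroutine keeps producing events, then asserts every watch observes the identical sequence. A sketch of that comparison, assuming a context-aware client-go and that something is concurrently mutating ConfigMaps in the namespace (the spec's goroutine plays that role); otherwise take() blocks waiting for events.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"

	// Anchor both watches at the same resourceVersion so the API server
	// delivers the same slice of event history to each of them.
	list, err := cs.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	opts := metav1.ListOptions{ResourceVersion: list.ResourceVersion}
	w1, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, opts)
	if err != nil {
		panic(err)
	}
	w2, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, opts)
	if err != nil {
		panic(err)
	}
	defer w1.Stop()
	defer w2.Stop()

	// Drain n events from a watch, recording each object's resourceVersion.
	const n = 5
	take := func(w watch.Interface) []string {
		var rvs []string
		for ev := range w.ResultChan() {
			if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
				rvs = append(rvs, cm.ResourceVersion)
			}
			if len(rvs) == n {
				break
			}
		}
		return rvs
	}
	a, b := take(w1), take(w2)
	for i := range a {
		if a[i] != b[i] {
			panic(fmt.Sprintf("watch order diverged at event %d: %s vs %s", i, a[i], b[i]))
		}
	}
	fmt.Println("both watches observed", n, "events in the same order")
}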
Feb 20 13:38:11.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:38:11.323: INFO: namespace watch-908 deletion completed in 6.18875151s • [SLOW TEST:12.013 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:38:11.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:38:11.401: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e" in namespace "projected-3709" to be "success or failure" Feb 20 13:38:11.490: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e": Phase="Pending", Reason="", readiness=false. Elapsed: 88.754536ms Feb 20 13:38:13.498: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096593697s Feb 20 13:38:15.508: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106707152s Feb 20 13:38:17.516: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114904409s Feb 20 13:38:19.525: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123870296s Feb 20 13:38:21.537: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.136078057s STEP: Saw pod success Feb 20 13:38:21.537: INFO: Pod "downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e" satisfied condition "success or failure" Feb 20 13:38:21.541: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e container client-container: STEP: delete the pod Feb 20 13:38:21.831: INFO: Waiting for pod downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e to disappear Feb 20 13:38:21.843: INFO: Pod downwardapi-volume-3eecaab9-804b-4b33-894f-9f9feecc188e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:38:21.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3709" for this suite. Feb 20 13:38:27.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:38:28.003: INFO: namespace projected-3709 deletion completed in 6.148052959s • [SLOW TEST:16.679 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:38:28.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-fa7f8e75-1258-47ee-add8-f1a322621987 STEP: Creating a pod to test consume configMaps Feb 20 13:38:28.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c" in namespace "configmap-47" to be "success or failure" Feb 20 13:38:28.137: INFO: Pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.813635ms Feb 20 13:38:30.148: INFO: Pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032346298s Feb 20 13:38:32.160: INFO: Pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044661841s Feb 20 13:38:34.172: INFO: Pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056499583s Feb 20 13:38:36.179: INFO: Pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.063601803s STEP: Saw pod success Feb 20 13:38:36.179: INFO: Pod "pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c" satisfied condition "success or failure" Feb 20 13:38:36.183: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c container configmap-volume-test: STEP: delete the pod Feb 20 13:38:36.277: INFO: Waiting for pod pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c to disappear Feb 20 13:38:36.369: INFO: Pod pod-configmaps-3e5937b0-2403-4843-ab21-050e6a4b083c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:38:36.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-47" for this suite. Feb 20 13:38:42.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:38:42.510: INFO: namespace configmap-47 deletion completed in 6.129423284s • [SLOW TEST:14.507 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:38:42.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-v29m STEP: Creating a pod to test atomic-volume-subpath Feb 20 13:38:42.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-v29m" in namespace "subpath-1818" to be "success or failure" Feb 20 13:38:42.650: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Pending", Reason="", readiness=false. Elapsed: 49.439343ms Feb 20 13:38:44.656: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055514479s Feb 20 13:38:46.662: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061289265s Feb 20 13:38:48.687: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086884394s Feb 20 13:38:50.697: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 8.096626759s Feb 20 13:38:52.708: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 10.107740682s Feb 20 13:38:54.732: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.132063794s Feb 20 13:38:56.748: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 14.147721802s Feb 20 13:38:58.759: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 16.158964307s Feb 20 13:39:00.767: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 18.16689296s Feb 20 13:39:02.776: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 20.176044721s Feb 20 13:39:04.784: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 22.183396707s Feb 20 13:39:06.789: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 24.189058441s Feb 20 13:39:08.797: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Running", Reason="", readiness=true. Elapsed: 26.196722314s Feb 20 13:39:10.853: INFO: Pod "pod-subpath-test-projected-v29m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.252306422s STEP: Saw pod success Feb 20 13:39:10.853: INFO: Pod "pod-subpath-test-projected-v29m" satisfied condition "success or failure" Feb 20 13:39:10.874: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-v29m container test-container-subpath-projected-v29m: STEP: delete the pod Feb 20 13:39:11.031: INFO: Waiting for pod pod-subpath-test-projected-v29m to disappear Feb 20 13:39:11.048: INFO: Pod pod-subpath-test-projected-v29m no longer exists STEP: Deleting pod pod-subpath-test-projected-v29m Feb 20 13:39:11.048: INFO: Deleting pod "pod-subpath-test-projected-v29m" in namespace "subpath-1818" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:39:11.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1818" for this suite. 
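The subpath pod above mounts a single directory of a projected volume via volumeMounts[].subPath. A minimal sketch of that kind of spec, assuming a pre-existing configMap; names, image, and keys are illustrative, not the test's actual values:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-test-projected
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath-projected
        image: busybox
        # Read the projected file through the subPath mount.
        command: ["sh", "-c", "cat /test-volume/projected-file"]
        volumeMounts:
        - name: projected-vol
          mountPath: /test-volume
          subPath: subpath-dir        # mounts only this directory of the volume
      volumes:
      - name: projected-vol
        projected:
          sources:
          - configMap:
              name: my-configmap      # assumed to exist in the namespace
              items:
              - key: data
                path: subpath-dir/projected-file
    EOF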
Feb 20 13:39:17.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:39:17.212: INFO: namespace subpath-1818 deletion completed in 6.147736769s • [SLOW TEST:34.701 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:39:17.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:39:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4510" for this suite. Feb 20 13:39:29.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:39:29.833: INFO: namespace namespaces-4510 deletion completed in 6.138096521s STEP: Destroying namespace "nsdeletetest-6849" for this suite. Feb 20 13:39:29.835: INFO: Namespace nsdeletetest-6849 was already deleted STEP: Destroying namespace "nsdeletetest-3102" for this suite. 
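The STEP lines above can be reproduced by hand; a rough shell equivalent of the create/delete/recreate/verify sequence (namespace and service names are illustrative):

    kubectl create namespace nsdeletetest
    kubectl -n nsdeletetest create service clusterip test-service --tcp=80:80
    kubectl delete namespace nsdeletetest    # cascading delete removes the service
    kubectl create namespace nsdeletetest    # recreate once deletion finishes
    kubectl -n nsdeletetest get services     # expect: No resources found.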
Feb 20 13:39:35.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:39:35.986: INFO: namespace nsdeletetest-3102 deletion completed in 6.151028721s • [SLOW TEST:18.774 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:39:35.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:39:36.079: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade" in namespace "projected-9997" to be "success or failure" Feb 20 13:39:36.087: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade": Phase="Pending", Reason="", readiness=false. Elapsed: 7.799108ms Feb 20 13:39:38.091: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012271753s Feb 20 13:39:40.096: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017385524s Feb 20 13:39:42.116: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037101204s Feb 20 13:39:44.122: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042888072s Feb 20 13:39:46.131: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.052157276s STEP: Saw pod success Feb 20 13:39:46.131: INFO: Pod "downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade" satisfied condition "success or failure" Feb 20 13:39:46.136: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade container client-container: STEP: delete the pod Feb 20 13:39:46.207: INFO: Waiting for pod downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade to disappear Feb 20 13:39:46.213: INFO: Pod downwardapi-volume-4a9ffce1-0181-414a-9c42-dee09af48ade no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:39:46.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9997" for this suite. Feb 20 13:39:52.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:39:52.339: INFO: namespace projected-9997 deletion completed in 6.119877566s • [SLOW TEST:16.352 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:39:52.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7cb4f6b6-4fc5-4ace-ba26-1860a542a39d STEP: Creating a pod to test consume secrets Feb 20 13:39:52.448: INFO: Waiting up to 5m0s for pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a" in namespace "secrets-7918" to be "success or failure" Feb 20 13:39:52.455: INFO: Pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.922507ms Feb 20 13:39:54.472: INFO: Pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024040489s Feb 20 13:39:56.485: INFO: Pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03747035s Feb 20 13:39:58.501: INFO: Pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053094294s Feb 20 13:40:00.511: INFO: Pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.063547713s STEP: Saw pod success Feb 20 13:40:00.511: INFO: Pod "pod-secrets-1d76660c-159e-4294-81c6-edb96876500a" satisfied condition "success or failure" Feb 20 13:40:00.518: INFO: Trying to get logs from node iruya-node pod pod-secrets-1d76660c-159e-4294-81c6-edb96876500a container secret-volume-test: STEP: delete the pod Feb 20 13:40:00.597: INFO: Waiting for pod pod-secrets-1d76660c-159e-4294-81c6-edb96876500a to disappear Feb 20 13:40:00.604: INFO: Pod pod-secrets-1d76660c-159e-4294-81c6-edb96876500a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:40:00.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7918" for this suite. Feb 20 13:40:06.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:40:06.783: INFO: namespace secrets-7918 deletion completed in 6.17393283s • [SLOW TEST:14.443 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:40:06.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5158 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 20 13:40:06.841: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 20 13:40:37.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5158 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:40:37.056: INFO: >>> kubeConfig: /root/.kube/config I0220 13:40:37.127993 8 log.go:172] (0xc00157f760) (0xc002b666e0) Create stream I0220 13:40:37.128052 8 log.go:172] (0xc00157f760) (0xc002b666e0) Stream added, broadcasting: 1 I0220 13:40:37.140855 8 log.go:172] (0xc00157f760) Reply frame received for 1 I0220 13:40:37.140910 8 log.go:172] (0xc00157f760) (0xc002a374a0) Create stream I0220 13:40:37.140917 8 log.go:172] (0xc00157f760) (0xc002a374a0) Stream added, broadcasting: 3 I0220 13:40:37.143848 8 log.go:172] (0xc00157f760) Reply frame received for 3 I0220 13:40:37.143901 8 log.go:172] (0xc00157f760) (0xc002a37540) Create stream I0220 13:40:37.143913 8 log.go:172] (0xc00157f760) (0xc002a37540) Stream added, broadcasting: 5 
I0220 13:40:37.147718 8 log.go:172] (0xc00157f760) Reply frame received for 5 I0220 13:40:37.487233 8 log.go:172] (0xc00157f760) Data frame received for 3 I0220 13:40:37.487320 8 log.go:172] (0xc002a374a0) (3) Data frame handling I0220 13:40:37.487398 8 log.go:172] (0xc002a374a0) (3) Data frame sent I0220 13:40:37.648038 8 log.go:172] (0xc00157f760) Data frame received for 1 I0220 13:40:37.648109 8 log.go:172] (0xc002b666e0) (1) Data frame handling I0220 13:40:37.648128 8 log.go:172] (0xc002b666e0) (1) Data frame sent I0220 13:40:37.648202 8 log.go:172] (0xc00157f760) (0xc002a374a0) Stream removed, broadcasting: 3 I0220 13:40:37.648353 8 log.go:172] (0xc00157f760) (0xc002b666e0) Stream removed, broadcasting: 1 I0220 13:40:37.648486 8 log.go:172] (0xc00157f760) (0xc002a37540) Stream removed, broadcasting: 5 I0220 13:40:37.648568 8 log.go:172] (0xc00157f760) (0xc002b666e0) Stream removed, broadcasting: 1 I0220 13:40:37.648587 8 log.go:172] (0xc00157f760) (0xc002a374a0) Stream removed, broadcasting: 3 I0220 13:40:37.648607 8 log.go:172] (0xc00157f760) (0xc002a37540) Stream removed, broadcasting: 5 I0220 13:40:37.649429 8 log.go:172] (0xc00157f760) Go away received Feb 20 13:40:37.650: INFO: Waiting for endpoints: map[] Feb 20 13:40:37.655: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5158 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:40:37.655: INFO: >>> kubeConfig: /root/.kube/config I0220 13:40:37.731974 8 log.go:172] (0xc000bf14a0) (0xc002a37720) Create stream I0220 13:40:37.732008 8 log.go:172] (0xc000bf14a0) (0xc002a37720) Stream added, broadcasting: 1 I0220 13:40:37.739614 8 log.go:172] (0xc000bf14a0) Reply frame received for 1 I0220 13:40:37.739715 8 log.go:172] (0xc000bf14a0) (0xc002b668c0) Create stream I0220 13:40:37.739737 8 log.go:172] (0xc000bf14a0) (0xc002b668c0) Stream added, broadcasting: 3 I0220 13:40:37.741630 8 log.go:172] (0xc000bf14a0) Reply frame received for 3 I0220 13:40:37.741664 8 log.go:172] (0xc000bf14a0) (0xc0025d6820) Create stream I0220 13:40:37.741675 8 log.go:172] (0xc000bf14a0) (0xc0025d6820) Stream added, broadcasting: 5 I0220 13:40:37.744989 8 log.go:172] (0xc000bf14a0) Reply frame received for 5 I0220 13:40:37.908504 8 log.go:172] (0xc000bf14a0) Data frame received for 3 I0220 13:40:37.908562 8 log.go:172] (0xc002b668c0) (3) Data frame handling I0220 13:40:37.908575 8 log.go:172] (0xc002b668c0) (3) Data frame sent I0220 13:40:38.083999 8 log.go:172] (0xc000bf14a0) (0xc002b668c0) Stream removed, broadcasting: 3 I0220 13:40:38.084153 8 log.go:172] (0xc000bf14a0) Data frame received for 1 I0220 13:40:38.084177 8 log.go:172] (0xc000bf14a0) (0xc0025d6820) Stream removed, broadcasting: 5 I0220 13:40:38.084202 8 log.go:172] (0xc002a37720) (1) Data frame handling I0220 13:40:38.084236 8 log.go:172] (0xc002a37720) (1) Data frame sent I0220 13:40:38.084262 8 log.go:172] (0xc000bf14a0) (0xc002a37720) Stream removed, broadcasting: 1 I0220 13:40:38.084307 8 log.go:172] (0xc000bf14a0) Go away received I0220 13:40:38.084459 8 log.go:172] (0xc000bf14a0) (0xc002a37720) Stream removed, broadcasting: 1 I0220 13:40:38.084481 8 log.go:172] (0xc000bf14a0) (0xc002b668c0) Stream removed, broadcasting: 3 I0220 13:40:38.084488 8 log.go:172] (0xc000bf14a0) (0xc0025d6820) Stream removed, broadcasting: 5 Feb 20 13:40:38.084: INFO: Waiting for endpoints: map[] 
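Both probes follow the same pattern: exec a curl inside the host test container against a netserver pod's /dial endpoint, which dials the other pod over HTTP and reports back. The second ExecWithOptions above, as a standalone command (IPs and pod names come from the test namespace):

    kubectl -n pod-network-test-5158 exec host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
    # A non-empty JSON response (one entry per try) means the target pod answered.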
[AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:40:38.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5158" for this suite. Feb 20 13:41:00.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:41:00.238: INFO: namespace pod-network-test-5158 deletion completed in 22.144179605s • [SLOW TEST:53.456 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:41:00.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:41:00.353: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:41:08.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6404" for this suite. 
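The websocket test drives the same pod log subresource that ordinary log retrieval uses; the suite dials it with a websocket client rather than plain HTTP. The endpoint itself can be checked with raw API access (the pod name below is a hypothetical stand-in; the log above does not show it):

    kubectl get --raw "/api/v1/namespaces/pods-6404/pods/<pod-name>/log"
    # The conformance test upgrades this same URL to a websocket and streams frames.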
Feb 20 13:41:50.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:41:50.636: INFO: namespace pods-6404 deletion completed in 42.150550996s • [SLOW TEST:50.397 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:41:50.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 20 13:41:50.795: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:41:50.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2412" for this suite. 
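-p 0 tells the proxy to bind an ephemeral port; the port actually chosen is printed at startup, and the test then curls /api/ through it. By hand, roughly:

    kubectl proxy -p 0 --disable-filter &
    # prints e.g. "Starting to serve on 127.0.0.1:<port>"
    curl http://127.0.0.1:<port>/api/    # substitute the printed port; expect the APIVersions object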
Feb 20 13:41:56.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:41:57.080: INFO: namespace kubectl-2412 deletion completed in 6.185871332s • [SLOW TEST:6.443 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:41:57.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 20 13:41:57.326: INFO: Waiting up to 5m0s for pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70" in namespace "downward-api-1239" to be "success or failure" Feb 20 13:41:57.346: INFO: Pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70": Phase="Pending", Reason="", readiness=false. Elapsed: 20.132248ms Feb 20 13:41:59.355: INFO: Pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028784558s Feb 20 13:42:01.363: INFO: Pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036837223s Feb 20 13:42:03.378: INFO: Pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052238018s Feb 20 13:42:05.392: INFO: Pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066387s STEP: Saw pod success Feb 20 13:42:05.392: INFO: Pod "downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70" satisfied condition "success or failure" Feb 20 13:42:05.398: INFO: Trying to get logs from node iruya-node pod downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70 container dapi-container: STEP: delete the pod Feb 20 13:42:05.622: INFO: Waiting for pod downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70 to disappear Feb 20 13:42:05.652: INFO: Pod downward-api-f5d23952-2708-438b-bfec-b324ccd0ce70 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:42:05.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1239" for this suite. 
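When a container declares no CPU or memory limits, the downward API falls back to node allocatable, which is what the dapi-container above verifies. A minimal sketch of such an env block (pod name, image, and command are illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-defaults
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
        # No resources.limits set, so these resolve to node allocatable values.
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF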
Feb 20 13:42:11.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:42:11.842: INFO: namespace downward-api-1239 deletion completed in 6.179883225s • [SLOW TEST:14.762 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:42:11.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 20 13:42:11.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9525' Feb 20 13:42:14.845: INFO: stderr: "" Feb 20 13:42:14.845: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 13:42:14.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:15.039: INFO: stderr: "" Feb 20 13:42:15.039: INFO: stdout: "update-demo-nautilus-pbf69 update-demo-nautilus-vjlw4 " Feb 20 13:42:15.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbf69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:15.202: INFO: stderr: "" Feb 20 13:42:15.202: INFO: stdout: "" Feb 20 13:42:15.202: INFO: update-demo-nautilus-pbf69 is created but not running Feb 20 13:42:20.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:21.884: INFO: stderr: "" Feb 20 13:42:21.884: INFO: stdout: "update-demo-nautilus-pbf69 update-demo-nautilus-vjlw4 " Feb 20 13:42:21.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbf69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:22.405: INFO: stderr: "" Feb 20 13:42:22.405: INFO: stdout: "" Feb 20 13:42:22.405: INFO: update-demo-nautilus-pbf69 is created but not running Feb 20 13:42:27.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:27.562: INFO: stderr: "" Feb 20 13:42:27.562: INFO: stdout: "update-demo-nautilus-pbf69 update-demo-nautilus-vjlw4 " Feb 20 13:42:27.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbf69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:27.701: INFO: stderr: "" Feb 20 13:42:27.701: INFO: stdout: "true" Feb 20 13:42:27.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbf69 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:27.789: INFO: stderr: "" Feb 20 13:42:27.789: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 13:42:27.789: INFO: validating pod update-demo-nautilus-pbf69 Feb 20 13:42:27.859: INFO: got data: { "image": "nautilus.jpg" } Feb 20 13:42:27.860: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 13:42:27.860: INFO: update-demo-nautilus-pbf69 is verified up and running Feb 20 13:42:27.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjlw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:27.947: INFO: stderr: "" Feb 20 13:42:27.947: INFO: stdout: "true" Feb 20 13:42:27.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjlw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:28.030: INFO: stderr: "" Feb 20 13:42:28.030: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 13:42:28.030: INFO: validating pod update-demo-nautilus-vjlw4 Feb 20 13:42:28.048: INFO: got data: { "image": "nautilus.jpg" } Feb 20 13:42:28.048: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 13:42:28.048: INFO: update-demo-nautilus-vjlw4 is verified up and running STEP: scaling down the replication controller Feb 20 13:42:28.049: INFO: scanned /root for discovery docs: Feb 20 13:42:28.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9525' Feb 20 13:42:29.164: INFO: stderr: "" Feb 20 13:42:29.164: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 20 13:42:29.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:29.306: INFO: stderr: "" Feb 20 13:42:29.306: INFO: stdout: "update-demo-nautilus-pbf69 update-demo-nautilus-vjlw4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 20 13:42:34.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:34.418: INFO: stderr: "" Feb 20 13:42:34.418: INFO: stdout: "update-demo-nautilus-pbf69 update-demo-nautilus-vjlw4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 20 13:42:39.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:39.589: INFO: stderr: "" Feb 20 13:42:39.590: INFO: stdout: "update-demo-nautilus-vjlw4 " Feb 20 13:42:39.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjlw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:39.722: INFO: stderr: "" Feb 20 13:42:39.722: INFO: stdout: "true" Feb 20 13:42:39.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjlw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:39.801: INFO: stderr: "" Feb 20 13:42:39.801: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 13:42:39.801: INFO: validating pod update-demo-nautilus-vjlw4 Feb 20 13:42:39.807: INFO: got data: { "image": "nautilus.jpg" } Feb 20 13:42:39.807: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 13:42:39.807: INFO: update-demo-nautilus-vjlw4 is verified up and running STEP: scaling up the replication controller Feb 20 13:42:39.809: INFO: scanned /root for discovery docs: Feb 20 13:42:39.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9525' Feb 20 13:42:41.022: INFO: stderr: "" Feb 20 13:42:41.022: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 13:42:41.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:41.164: INFO: stderr: "" Feb 20 13:42:41.164: INFO: stdout: "update-demo-nautilus-pbjgq update-demo-nautilus-vjlw4 " Feb 20 13:42:41.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbjgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:41.284: INFO: stderr: "" Feb 20 13:42:41.284: INFO: stdout: "" Feb 20 13:42:41.284: INFO: update-demo-nautilus-pbjgq is created but not running Feb 20 13:42:46.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:46.488: INFO: stderr: "" Feb 20 13:42:46.488: INFO: stdout: "update-demo-nautilus-pbjgq update-demo-nautilus-vjlw4 " Feb 20 13:42:46.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbjgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:46.680: INFO: stderr: "" Feb 20 13:42:46.680: INFO: stdout: "" Feb 20 13:42:46.680: INFO: update-demo-nautilus-pbjgq is created but not running Feb 20 13:42:51.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9525' Feb 20 13:42:51.873: INFO: stderr: "" Feb 20 13:42:51.873: INFO: stdout: "update-demo-nautilus-pbjgq update-demo-nautilus-vjlw4 " Feb 20 13:42:51.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbjgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:52.015: INFO: stderr: "" Feb 20 13:42:52.015: INFO: stdout: "true" Feb 20 13:42:52.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbjgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:52.135: INFO: stderr: "" Feb 20 13:42:52.135: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 13:42:52.135: INFO: validating pod update-demo-nautilus-pbjgq Feb 20 13:42:52.153: INFO: got data: { "image": "nautilus.jpg" } Feb 20 13:42:52.154: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 13:42:52.154: INFO: update-demo-nautilus-pbjgq is verified up and running Feb 20 13:42:52.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjlw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:52.334: INFO: stderr: "" Feb 20 13:42:52.334: INFO: stdout: "true" Feb 20 13:42:52.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjlw4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9525' Feb 20 13:42:52.445: INFO: stderr: "" Feb 20 13:42:52.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 13:42:52.445: INFO: validating pod update-demo-nautilus-vjlw4 Feb 20 13:42:52.450: INFO: got data: { "image": "nautilus.jpg" } Feb 20 13:42:52.450: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 13:42:52.450: INFO: update-demo-nautilus-vjlw4 is verified up and running STEP: using delete to clean up resources Feb 20 13:42:52.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9525' Feb 20 13:42:52.553: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 13:42:52.554: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 20 13:42:52.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9525' Feb 20 13:42:52.671: INFO: stderr: "No resources found.\n" Feb 20 13:42:52.671: INFO: stdout: "" Feb 20 13:42:52.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9525 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 13:42:52.832: INFO: stderr: "" Feb 20 13:42:52.832: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:42:52.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9525" for this suite. 
Feb 20 13:43:14.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:43:15.000: INFO: namespace kubectl-9525 deletion completed in 22.156719921s • [SLOW TEST:63.158 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:43:15.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:43:15.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381" in namespace "downward-api-9830" to be "success or failure" Feb 20 13:43:15.117: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381": Phase="Pending", Reason="", readiness=false. Elapsed: 12.698844ms Feb 20 13:43:17.124: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020179842s Feb 20 13:43:19.138: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033341666s Feb 20 13:43:21.144: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040240181s Feb 20 13:43:23.154: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049338537s Feb 20 13:43:25.160: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.055864446s STEP: Saw pod success Feb 20 13:43:25.160: INFO: Pod "downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381" satisfied condition "success or failure" Feb 20 13:43:25.163: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381 container client-container: STEP: delete the pod Feb 20 13:43:25.220: INFO: Waiting for pod downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381 to disappear Feb 20 13:43:25.375: INFO: Pod downwardapi-volume-ee50ebb5-c068-421e-9313-bb8fd992e381 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:43:25.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9830" for this suite. Feb 20 13:43:31.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:43:31.549: INFO: namespace downward-api-9830 deletion completed in 6.163867019s • [SLOW TEST:16.548 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:43:31.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:43:31.901: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b89dce86-52d2-4aae-9a56-4ae5e9cd4bff", Controller:(*bool)(0xc001d4327a), BlockOwnerDeletion:(*bool)(0xc001d4327b)}} Feb 20 13:43:31.918: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2acfd323-629e-47fd-b62a-26158288a72b", Controller:(*bool)(0xc001d4341a), BlockOwnerDeletion:(*bool)(0xc001d4341b)}} Feb 20 13:43:31.941: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a60ce41e-e6e4-43e5-83d8-a5b3ee5376a1", Controller:(*bool)(0xc002bd2832), BlockOwnerDeletion:(*bool)(0xc002bd2833)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:43:37.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2142" for this suite. 
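The three pods above form an ownership cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector must still delete them rather than deadlock. A sketch of the ownerReferences shape on pod1; the uid must match pod3's live UID, so in practice the references are filled in only after the referenced pod exists (image is an illustrative stand-in):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: pod3
        uid: b89dce86-52d2-4aae-9a56-4ae5e9cd4bff   # must be pod3's actual UID
        controller: true
        blockOwnerDeletion: true
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF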
Feb 20 13:43:43.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:43:43.189: INFO: namespace gc-2142 deletion completed in 6.122635539s • [SLOW TEST:11.640 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:43:43.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 20 13:43:43.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 20 13:43:43.269: INFO: Waiting for terminating namespaces to be deleted... Feb 20 13:43:43.271: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 20 13:43:43.281: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 20 13:43:43.281: INFO: Container weave ready: true, restart count 0 Feb 20 13:43:43.281: INFO: Container weave-npc ready: true, restart count 0 Feb 20 13:43:43.281: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.281: INFO: Container kube-bench ready: false, restart count 0 Feb 20 13:43:43.281: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.281: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 13:43:43.281: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 20 13:43:43.289: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 13:43:43.289: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container kube-controller-manager ready: true, restart count 23 Feb 20 13:43:43.289: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container kube-apiserver ready: true, restart count 0 Feb 20 13:43:43.289: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container coredns ready: true, restart count 0 Feb 20 13:43:43.289: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container kube-scheduler ready: 
true, restart count 15 Feb 20 13:43:43.289: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 20 13:43:43.289: INFO: Container weave ready: true, restart count 0 Feb 20 13:43:43.289: INFO: Container weave-npc ready: true, restart count 0 Feb 20 13:43:43.289: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container coredns ready: true, restart count 0 Feb 20 13:43:43.289: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 20 13:43:43.289: INFO: Container etcd ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-36e62ee7-efd3-4e25-9626-8912ed195649 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-36e62ee7-efd3-4e25-9626-8912ed195649 off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-36e62ee7-efd3-4e25-9626-8912ed195649 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:44:01.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7825" for this suite. Feb 20 13:44:21.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:44:21.711: INFO: namespace sched-pred-7825 deletion completed in 20.174963954s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:38.521 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:44:21.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:44:21.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026" in namespace "downward-api-1319" to be "success or failure" Feb 20 13:44:21.838: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026": Phase="Pending", Reason="", readiness=false. Elapsed: 9.711859ms Feb 20 13:44:23.855: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02648842s Feb 20 13:44:25.869: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040198934s Feb 20 13:44:27.880: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051394961s Feb 20 13:44:29.899: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070401302s Feb 20 13:44:31.911: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082262801s STEP: Saw pod success Feb 20 13:44:31.911: INFO: Pod "downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026" satisfied condition "success or failure" Feb 20 13:44:31.919: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026 container client-container: STEP: delete the pod Feb 20 13:44:32.005: INFO: Waiting for pod downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026 to disappear Feb 20 13:44:32.076: INFO: Pod downwardapi-volume-afac14d9-97fa-4e79-8df7-3a1325b97026 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:44:32.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1319" for this suite. 
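The volume-based variant projects the (defaulted) memory limit into a file rather than an env var; with no limit set, the file contains the node allocatable value divided by the divisor. A sketch of the relevant spec (file path and divisor are illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-memlimit
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container   # required for volume items
              resource: limits.memory
              divisor: 1Mi
    EOF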
Feb 20 13:44:38.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:44:38.265: INFO: namespace downward-api-1319 deletion completed in 6.178856383s • [SLOW TEST:16.553 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:44:38.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 20 13:44:38.416: INFO: Waiting up to 5m0s for pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343" in namespace "emptydir-135" to be "success or failure" Feb 20 13:44:38.438: INFO: Pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343": Phase="Pending", Reason="", readiness=false. Elapsed: 22.007859ms Feb 20 13:44:40.452: INFO: Pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035427656s Feb 20 13:44:43.472: INFO: Pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343": Phase="Pending", Reason="", readiness=false. Elapsed: 5.055566111s Feb 20 13:44:45.489: INFO: Pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343": Phase="Pending", Reason="", readiness=false. Elapsed: 7.07278042s Feb 20 13:44:47.497: INFO: Pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.080364217s STEP: Saw pod success Feb 20 13:44:47.497: INFO: Pod "pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343" satisfied condition "success or failure" Feb 20 13:44:47.504: INFO: Trying to get logs from node iruya-node pod pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343 container test-container: STEP: delete the pod Feb 20 13:44:47.705: INFO: Waiting for pod pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343 to disappear Feb 20 13:44:47.733: INFO: Pod pod-c9a29c43-b401-4196-ad91-ff9c3cfd8343 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:44:47.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-135" for this suite. 
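"tmpfs" here means medium: Memory on the emptyDir; the test container then inspects the mount to confirm the 0777 mode. A sketch, with busybox standing in for the test's mounttest image:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-emptydir-tmpfs
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # Show the mount and print the volume's mode; 0777 is expected here.
        command: ["sh", "-c", "mount | grep /test-volume && stat -c %a /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory    # backs the volume with tmpfs instead of node disk
    EOF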
Feb 20 13:44:53.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:44:53.981: INFO: namespace emptydir-135 deletion completed in 6.243298732s • [SLOW TEST:15.716 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:44:53.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:44:54.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7" in namespace "downward-api-1859" to be "success or failure" Feb 20 13:44:54.125: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.859218ms Feb 20 13:44:56.136: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023614377s Feb 20 13:44:58.143: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031179968s Feb 20 13:45:00.151: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039130012s Feb 20 13:45:02.169: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0568235s Feb 20 13:45:04.179: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066671711s STEP: Saw pod success Feb 20 13:45:04.179: INFO: Pod "downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7" satisfied condition "success or failure" Feb 20 13:45:04.183: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7 container client-container: STEP: delete the pod Feb 20 13:45:04.254: INFO: Waiting for pod downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7 to disappear Feb 20 13:45:04.334: INFO: Pod downwardapi-volume-91c558a7-a23e-4cac-bf4d-abeb8c0e77b7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:45:04.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1859" for this suite. 
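The "should provide container's memory limit" spec above is the counterpart of the earlier fallback case: here the container declares an explicit memory limit, and the downward API volume must surface exactly that value. A sketch of the two pieces, with the limit value and file path as assumptions.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Give the container an explicit memory limit...
	limits := v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")} // illustrative value
	// ...and project that same limit into a downward API volume file.
	item := v1.DownwardAPIVolumeFile{
		Path: "memory_limit", // hypothetical path
		ResourceFieldRef: &v1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.memory",
		},
	}
	fmt.Println(limits.Memory().String(), "->", item.Path)
}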
Feb 20 13:45:10.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:45:10.498: INFO: namespace downward-api-1859 deletion completed in 6.15801173s • [SLOW TEST:16.517 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:45:10.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-09796ad0-e075-417c-8eef-712844abac80 STEP: Creating a pod to test consume configMaps Feb 20 13:45:10.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c" in namespace "projected-8464" to be "success or failure" Feb 20 13:45:10.671: INFO: Pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.646554ms Feb 20 13:45:12.684: INFO: Pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048937581s Feb 20 13:45:14.708: INFO: Pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072562291s Feb 20 13:45:16.723: INFO: Pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087907139s Feb 20 13:45:18.731: INFO: Pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095650643s STEP: Saw pod success Feb 20 13:45:18.731: INFO: Pod "pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c" satisfied condition "success or failure" Feb 20 13:45:18.735: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c container projected-configmap-volume-test: STEP: delete the pod Feb 20 13:45:19.572: INFO: Waiting for pod pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c to disappear Feb 20 13:45:19.589: INFO: Pod pod-projected-configmaps-1f71e3ba-ce4e-4033-95bb-b6c4f48e917c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:45:19.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8464" for this suite. 
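The projected configMap spec above consumes one configMap through more than one volume in the same pod. A sketch of that shape, assuming two projected volumes backed by the same configMap; volume names here are illustrative, not taken from the test source.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One configMap, referenced from two projected volumes in the same pod.
	cmRef := v1.LocalObjectReference{Name: "projected-configmap-test-volume"}
	mkVol := func(name string) v1.Volume {
		return v1.Volume{
			Name: name,
			VolumeSource: v1.VolumeSource{
				Projected: &v1.ProjectedVolumeSource{
					Sources: []v1.VolumeProjection{{
						ConfigMap: &v1.ConfigMapProjection{LocalObjectReference: cmRef},
					}},
				},
			},
		}
	}
	vols := []v1.Volume{mkVol("projected-configmap-volume"), mkVol("projected-configmap-volume-2")}
	fmt.Println(len(vols), "volumes backed by", cmRef.Name)
}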
Feb 20 13:45:25.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:45:25.856: INFO: namespace projected-8464 deletion completed in 6.235910604s • [SLOW TEST:15.357 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:45:25.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Feb 20 13:45:26.225: INFO: Waiting up to 5m0s for pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6" in namespace "containers-451" to be "success or failure" Feb 20 13:45:26.299: INFO: Pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6": Phase="Pending", Reason="", readiness=false. Elapsed: 74.357988ms Feb 20 13:45:28.305: INFO: Pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079838656s Feb 20 13:45:30.322: INFO: Pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096486597s Feb 20 13:45:32.357: INFO: Pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131570326s Feb 20 13:45:34.432: INFO: Pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.206507318s STEP: Saw pod success Feb 20 13:45:34.432: INFO: Pod "client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6" satisfied condition "success or failure" Feb 20 13:45:34.437: INFO: Trying to get logs from node iruya-node pod client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6 container test-container: STEP: delete the pod Feb 20 13:45:34.503: INFO: Waiting for pod client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6 to disappear Feb 20 13:45:34.510: INFO: Pod client-containers-3bb752de-dc38-4f79-81c4-9bfdae8b16e6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:45:34.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-451" for this suite. 
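The Docker Containers spec above ("override the image's default command and arguments", the "override all" case) relies on the standard Kubernetes rule: a container's Command replaces the image ENTRYPOINT and its Args replace the image CMD. A minimal sketch, with image and values as assumptions.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Setting both fields overrides everything baked into the image.
	c := v1.Container{
		Name:    "test-container",
		Image:   "busybox",                          // illustrative; the test uses its own test image
		Command: []string{"/bin/echo"},              // overrides ENTRYPOINT
		Args:    []string{"override", "arguments"},  // overrides CMD
	}
	fmt.Println(c.Command, c.Args)
}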
Feb 20 13:45:40.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:45:40.688: INFO: namespace containers-451 deletion completed in 6.172594367s • [SLOW TEST:14.832 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:45:40.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 20 13:45:40.793: INFO: Waiting up to 5m0s for pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801" in namespace "emptydir-5277" to be "success or failure" Feb 20 13:45:40.823: INFO: Pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801": Phase="Pending", Reason="", readiness=false. Elapsed: 29.672819ms Feb 20 13:45:42.837: INFO: Pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044088933s Feb 20 13:45:44.845: INFO: Pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051718027s Feb 20 13:45:46.880: INFO: Pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086694206s Feb 20 13:45:48.887: INFO: Pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094207586s STEP: Saw pod success Feb 20 13:45:48.887: INFO: Pod "pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801" satisfied condition "success or failure" Feb 20 13:45:48.892: INFO: Trying to get logs from node iruya-node pod pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801 container test-container: STEP: delete the pod Feb 20 13:45:48.955: INFO: Waiting for pod pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801 to disappear Feb 20 13:45:48.970: INFO: Pod pod-69bb3bd9-a6d0-4b5a-9bb3-4773bbd92801 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:45:48.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5277" for this suite. 
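The "volume on tmpfs should have the correct mode" spec above inspects the mount point itself rather than a file within it. A short sketch of a container that surfaces the mount's mode in its log, under the same assumptions as the earlier emptyDir sketch.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Print the mode of the mount point so the test can assert on it.
	c := v1.Container{
		Name:         "test-container",
		Image:        "busybox",
		Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
		VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
	}
	v := v1.Volume{
		Name:         "test-volume",
		VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory}},
	}
	fmt.Println(c.Command, v.Name)
}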
Feb 20 13:45:55.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:45:55.171: INFO: namespace emptydir-5277 deletion completed in 6.195393382s • [SLOW TEST:14.482 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:45:55.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Feb 20 13:45:55.241: INFO: Waiting up to 5m0s for pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179" in namespace "var-expansion-5803" to be "success or failure" Feb 20 13:45:55.248: INFO: Pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179": Phase="Pending", Reason="", readiness=false. Elapsed: 7.389047ms Feb 20 13:45:57.255: INFO: Pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014160016s Feb 20 13:45:59.263: INFO: Pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022020393s Feb 20 13:46:01.273: INFO: Pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031957985s Feb 20 13:46:03.281: INFO: Pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04029963s STEP: Saw pod success Feb 20 13:46:03.281: INFO: Pod "var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179" satisfied condition "success or failure" Feb 20 13:46:03.285: INFO: Trying to get logs from node iruya-node pod var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179 container dapi-container: STEP: delete the pod Feb 20 13:46:03.345: INFO: Waiting for pod var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179 to disappear Feb 20 13:46:03.350: INFO: Pod var-expansion-f21b439b-6c02-4bb9-9940-11d1187e2179 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:46:03.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5803" for this suite. 
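The Variable Expansion spec above exercises $(VAR) substitution in a container command: the kubelet expands references to the container's own env vars before the process starts, so no shell is needed for the expansion. A sketch with illustrative names and values.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// $(MESSAGE) is expanded by the kubelet, not by a shell.
	c := v1.Container{
		Name:    "dapi-container", // container name taken from the log above
		Image:   "busybox",
		Env:     []v1.EnvVar{{Name: "MESSAGE", Value: "test substitution"}}, // illustrative
		Command: []string{"/bin/echo", "$(MESSAGE)"},
	}
	fmt.Println(c.Command)
}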
Feb 20 13:46:09.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:46:09.603: INFO: namespace var-expansion-5803 deletion completed in 6.248698876s • [SLOW TEST:14.432 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:46:09.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 20 13:46:35.850: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:35.850: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:35.933854 8 log.go:172] (0xc0024b2a50) (0xc003289860) Create stream I0220 13:46:35.933907 8 log.go:172] (0xc0024b2a50) (0xc003289860) Stream added, broadcasting: 1 I0220 13:46:35.942832 8 log.go:172] (0xc0024b2a50) Reply frame received for 1 I0220 13:46:35.942889 8 log.go:172] (0xc0024b2a50) (0xc003289900) Create stream I0220 13:46:35.942896 8 log.go:172] (0xc0024b2a50) (0xc003289900) Stream added, broadcasting: 3 I0220 13:46:35.944242 8 log.go:172] (0xc0024b2a50) Reply frame received for 3 I0220 13:46:35.944270 8 log.go:172] (0xc0024b2a50) (0xc002e1a0a0) Create stream I0220 13:46:35.944277 8 log.go:172] (0xc0024b2a50) (0xc002e1a0a0) Stream added, broadcasting: 5 I0220 13:46:35.945912 8 log.go:172] (0xc0024b2a50) Reply frame received for 5 I0220 13:46:36.083293 8 log.go:172] (0xc0024b2a50) Data frame received for 3 I0220 13:46:36.083370 8 log.go:172] (0xc003289900) (3) Data frame handling I0220 13:46:36.083402 8 log.go:172] (0xc003289900) (3) Data frame sent I0220 13:46:36.226107 8 log.go:172] (0xc0024b2a50) (0xc003289900) Stream removed, broadcasting: 3 I0220 13:46:36.226194 8 log.go:172] (0xc0024b2a50) Data frame received for 1 I0220 13:46:36.226337 8 log.go:172] (0xc0024b2a50) (0xc002e1a0a0) Stream removed, broadcasting: 5 I0220 13:46:36.226370 8 log.go:172] (0xc003289860) (1) Data frame handling I0220 13:46:36.226377 8 log.go:172] (0xc003289860) (1) Data frame sent I0220 13:46:36.226383 8 log.go:172] (0xc0024b2a50) (0xc003289860) Stream removed, broadcasting: 1 I0220 13:46:36.226395 8 log.go:172] (0xc0024b2a50) Go away received I0220 13:46:36.226672 8 log.go:172] (0xc0024b2a50) (0xc003289860) Stream removed, 
broadcasting: 1 I0220 13:46:36.226713 8 log.go:172] (0xc0024b2a50) (0xc003289900) Stream removed, broadcasting: 3 I0220 13:46:36.226718 8 log.go:172] (0xc0024b2a50) (0xc002e1a0a0) Stream removed, broadcasting: 5 Feb 20 13:46:36.226: INFO: Exec stderr: "" Feb 20 13:46:36.226: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:36.226: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:36.319566 8 log.go:172] (0xc0024b3760) (0xc003289c20) Create stream I0220 13:46:36.319793 8 log.go:172] (0xc0024b3760) (0xc003289c20) Stream added, broadcasting: 1 I0220 13:46:36.330304 8 log.go:172] (0xc0024b3760) Reply frame received for 1 I0220 13:46:36.330412 8 log.go:172] (0xc0024b3760) (0xc002cee5a0) Create stream I0220 13:46:36.330439 8 log.go:172] (0xc0024b3760) (0xc002cee5a0) Stream added, broadcasting: 3 I0220 13:46:36.334070 8 log.go:172] (0xc0024b3760) Reply frame received for 3 I0220 13:46:36.334172 8 log.go:172] (0xc0024b3760) (0xc002cee640) Create stream I0220 13:46:36.334184 8 log.go:172] (0xc0024b3760) (0xc002cee640) Stream added, broadcasting: 5 I0220 13:46:36.338182 8 log.go:172] (0xc0024b3760) Reply frame received for 5 I0220 13:46:36.438172 8 log.go:172] (0xc0024b3760) Data frame received for 3 I0220 13:46:36.438216 8 log.go:172] (0xc002cee5a0) (3) Data frame handling I0220 13:46:36.438232 8 log.go:172] (0xc002cee5a0) (3) Data frame sent I0220 13:46:36.679280 8 log.go:172] (0xc0024b3760) (0xc002cee5a0) Stream removed, broadcasting: 3 I0220 13:46:36.679393 8 log.go:172] (0xc0024b3760) Data frame received for 1 I0220 13:46:36.679405 8 log.go:172] (0xc003289c20) (1) Data frame handling I0220 13:46:36.679418 8 log.go:172] (0xc003289c20) (1) Data frame sent I0220 13:46:36.679498 8 log.go:172] (0xc0024b3760) (0xc003289c20) Stream removed, broadcasting: 1 I0220 13:46:36.679705 8 log.go:172] (0xc0024b3760) (0xc002cee640) Stream removed, broadcasting: 5 I0220 13:46:36.679767 8 log.go:172] (0xc0024b3760) (0xc003289c20) Stream removed, broadcasting: 1 I0220 13:46:36.679786 8 log.go:172] (0xc0024b3760) (0xc002cee5a0) Stream removed, broadcasting: 3 I0220 13:46:36.679798 8 log.go:172] (0xc0024b3760) (0xc002cee640) Stream removed, broadcasting: 5 I0220 13:46:36.680163 8 log.go:172] (0xc0024b3760) Go away received Feb 20 13:46:36.680: INFO: Exec stderr: "" Feb 20 13:46:36.680: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:36.680: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:36.755035 8 log.go:172] (0xc001cceb00) (0xc002e1a280) Create stream I0220 13:46:36.755102 8 log.go:172] (0xc001cceb00) (0xc002e1a280) Stream added, broadcasting: 1 I0220 13:46:36.767982 8 log.go:172] (0xc001cceb00) Reply frame received for 1 I0220 13:46:36.768031 8 log.go:172] (0xc001cceb00) (0xc003289cc0) Create stream I0220 13:46:36.768039 8 log.go:172] (0xc001cceb00) (0xc003289cc0) Stream added, broadcasting: 3 I0220 13:46:36.769598 8 log.go:172] (0xc001cceb00) Reply frame received for 3 I0220 13:46:36.769628 8 log.go:172] (0xc001cceb00) (0xc002cee820) Create stream I0220 13:46:36.769637 8 log.go:172] (0xc001cceb00) (0xc002cee820) Stream added, broadcasting: 5 I0220 13:46:36.770845 8 log.go:172] (0xc001cceb00) Reply frame received for 5 I0220 13:46:36.936620 8 log.go:172] (0xc001cceb00) Data frame 
received for 3 I0220 13:46:36.936689 8 log.go:172] (0xc003289cc0) (3) Data frame handling I0220 13:46:36.936708 8 log.go:172] (0xc003289cc0) (3) Data frame sent I0220 13:46:37.148352 8 log.go:172] (0xc001cceb00) Data frame received for 1 I0220 13:46:37.148438 8 log.go:172] (0xc002e1a280) (1) Data frame handling I0220 13:46:37.148453 8 log.go:172] (0xc002e1a280) (1) Data frame sent I0220 13:46:37.149121 8 log.go:172] (0xc001cceb00) (0xc002e1a280) Stream removed, broadcasting: 1 I0220 13:46:37.149161 8 log.go:172] (0xc001cceb00) (0xc003289cc0) Stream removed, broadcasting: 3 I0220 13:46:37.149212 8 log.go:172] (0xc001cceb00) (0xc002cee820) Stream removed, broadcasting: 5 I0220 13:46:37.149232 8 log.go:172] (0xc001cceb00) Go away received I0220 13:46:37.149319 8 log.go:172] (0xc001cceb00) (0xc002e1a280) Stream removed, broadcasting: 1 I0220 13:46:37.149335 8 log.go:172] (0xc001cceb00) (0xc003289cc0) Stream removed, broadcasting: 3 I0220 13:46:37.149349 8 log.go:172] (0xc001cceb00) (0xc002cee820) Stream removed, broadcasting: 5 Feb 20 13:46:37.149: INFO: Exec stderr: "" Feb 20 13:46:37.149: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:37.149: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:37.204149 8 log.go:172] (0xc0030fbb80) (0xc00327fb80) Create stream I0220 13:46:37.204191 8 log.go:172] (0xc0030fbb80) (0xc00327fb80) Stream added, broadcasting: 1 I0220 13:46:37.209155 8 log.go:172] (0xc0030fbb80) Reply frame received for 1 I0220 13:46:37.209175 8 log.go:172] (0xc0030fbb80) (0xc002e94780) Create stream I0220 13:46:37.209182 8 log.go:172] (0xc0030fbb80) (0xc002e94780) Stream added, broadcasting: 3 I0220 13:46:37.210301 8 log.go:172] (0xc0030fbb80) Reply frame received for 3 I0220 13:46:37.210321 8 log.go:172] (0xc0030fbb80) (0xc002e1a320) Create stream I0220 13:46:37.210328 8 log.go:172] (0xc0030fbb80) (0xc002e1a320) Stream added, broadcasting: 5 I0220 13:46:37.211425 8 log.go:172] (0xc0030fbb80) Reply frame received for 5 I0220 13:46:37.322145 8 log.go:172] (0xc0030fbb80) Data frame received for 3 I0220 13:46:37.322256 8 log.go:172] (0xc002e94780) (3) Data frame handling I0220 13:46:37.322297 8 log.go:172] (0xc002e94780) (3) Data frame sent I0220 13:46:37.480053 8 log.go:172] (0xc0030fbb80) Data frame received for 1 I0220 13:46:37.480142 8 log.go:172] (0xc0030fbb80) (0xc002e1a320) Stream removed, broadcasting: 5 I0220 13:46:37.480201 8 log.go:172] (0xc00327fb80) (1) Data frame handling I0220 13:46:37.480216 8 log.go:172] (0xc00327fb80) (1) Data frame sent I0220 13:46:37.480243 8 log.go:172] (0xc0030fbb80) (0xc002e94780) Stream removed, broadcasting: 3 I0220 13:46:37.480294 8 log.go:172] (0xc0030fbb80) (0xc00327fb80) Stream removed, broadcasting: 1 I0220 13:46:37.480303 8 log.go:172] (0xc0030fbb80) Go away received I0220 13:46:37.480445 8 log.go:172] (0xc0030fbb80) (0xc00327fb80) Stream removed, broadcasting: 1 I0220 13:46:37.480455 8 log.go:172] (0xc0030fbb80) (0xc002e94780) Stream removed, broadcasting: 3 I0220 13:46:37.480458 8 log.go:172] (0xc0030fbb80) (0xc002e1a320) Stream removed, broadcasting: 5 Feb 20 13:46:37.480: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 20 13:46:37.480: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:37.480: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:37.569052 8 log.go:172] (0xc0032fa790) (0xc00327fea0) Create stream I0220 13:46:37.569160 8 log.go:172] (0xc0032fa790) (0xc00327fea0) Stream added, broadcasting: 1 I0220 13:46:37.580475 8 log.go:172] (0xc0032fa790) Reply frame received for 1 I0220 13:46:37.580538 8 log.go:172] (0xc0032fa790) (0xc002e1a3c0) Create stream I0220 13:46:37.580547 8 log.go:172] (0xc0032fa790) (0xc002e1a3c0) Stream added, broadcasting: 3 I0220 13:46:37.582702 8 log.go:172] (0xc0032fa790) Reply frame received for 3 I0220 13:46:37.582757 8 log.go:172] (0xc0032fa790) (0xc002cee8c0) Create stream I0220 13:46:37.582772 8 log.go:172] (0xc0032fa790) (0xc002cee8c0) Stream added, broadcasting: 5 I0220 13:46:37.585281 8 log.go:172] (0xc0032fa790) Reply frame received for 5 I0220 13:46:37.723916 8 log.go:172] (0xc0032fa790) Data frame received for 3 I0220 13:46:37.724005 8 log.go:172] (0xc002e1a3c0) (3) Data frame handling I0220 13:46:37.724029 8 log.go:172] (0xc002e1a3c0) (3) Data frame sent I0220 13:46:38.018451 8 log.go:172] (0xc0032fa790) (0xc002e1a3c0) Stream removed, broadcasting: 3 I0220 13:46:38.018642 8 log.go:172] (0xc0032fa790) Data frame received for 1 I0220 13:46:38.018656 8 log.go:172] (0xc00327fea0) (1) Data frame handling I0220 13:46:38.018669 8 log.go:172] (0xc00327fea0) (1) Data frame sent I0220 13:46:38.018755 8 log.go:172] (0xc0032fa790) (0xc00327fea0) Stream removed, broadcasting: 1 I0220 13:46:38.018854 8 log.go:172] (0xc0032fa790) (0xc002cee8c0) Stream removed, broadcasting: 5 I0220 13:46:38.018960 8 log.go:172] (0xc0032fa790) (0xc00327fea0) Stream removed, broadcasting: 1 I0220 13:46:38.018965 8 log.go:172] (0xc0032fa790) (0xc002e1a3c0) Stream removed, broadcasting: 3 I0220 13:46:38.018969 8 log.go:172] (0xc0032fa790) (0xc002cee8c0) Stream removed, broadcasting: 5 Feb 20 13:46:38.019: INFO: Exec stderr: "" Feb 20 13:46:38.019: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:38.019: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:38.022592 8 log.go:172] (0xc0032fa790) Go away received I0220 13:46:38.251670 8 log.go:172] (0xc003262e70) (0xc002ceea00) Create stream I0220 13:46:38.251776 8 log.go:172] (0xc003262e70) (0xc002ceea00) Stream added, broadcasting: 1 I0220 13:46:38.280577 8 log.go:172] (0xc003262e70) Reply frame received for 1 I0220 13:46:38.280703 8 log.go:172] (0xc003262e70) (0xc003289ea0) Create stream I0220 13:46:38.280715 8 log.go:172] (0xc003262e70) (0xc003289ea0) Stream added, broadcasting: 3 I0220 13:46:38.283043 8 log.go:172] (0xc003262e70) Reply frame received for 3 I0220 13:46:38.283063 8 log.go:172] (0xc003262e70) (0xc002ceeaa0) Create stream I0220 13:46:38.283069 8 log.go:172] (0xc003262e70) (0xc002ceeaa0) Stream added, broadcasting: 5 I0220 13:46:38.284773 8 log.go:172] (0xc003262e70) Reply frame received for 5 I0220 13:46:38.408836 8 log.go:172] (0xc003262e70) Data frame received for 3 I0220 13:46:38.408864 8 log.go:172] (0xc003289ea0) (3) Data frame handling I0220 13:46:38.408882 8 log.go:172] (0xc003289ea0) (3) Data frame sent I0220 13:46:38.554020 8 log.go:172] (0xc003262e70) (0xc003289ea0) Stream removed, broadcasting: 3 I0220 13:46:38.554139 8 log.go:172] (0xc003262e70) Data frame received for 1 I0220 13:46:38.554170 8 log.go:172] (0xc002ceea00) (1) Data frame 
handling I0220 13:46:38.554194 8 log.go:172] (0xc002ceea00) (1) Data frame sent I0220 13:46:38.554208 8 log.go:172] (0xc003262e70) (0xc002ceeaa0) Stream removed, broadcasting: 5 I0220 13:46:38.554249 8 log.go:172] (0xc003262e70) (0xc002ceea00) Stream removed, broadcasting: 1 I0220 13:46:38.554400 8 log.go:172] (0xc003262e70) Go away received I0220 13:46:38.554581 8 log.go:172] (0xc003262e70) (0xc002ceea00) Stream removed, broadcasting: 1 I0220 13:46:38.554696 8 log.go:172] (0xc003262e70) (0xc003289ea0) Stream removed, broadcasting: 3 I0220 13:46:38.554730 8 log.go:172] (0xc003262e70) (0xc002ceeaa0) Stream removed, broadcasting: 5 Feb 20 13:46:38.554: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 20 13:46:38.554: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:38.554: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:38.623806 8 log.go:172] (0xc002bb7ef0) (0xc002e94aa0) Create stream I0220 13:46:38.623846 8 log.go:172] (0xc002bb7ef0) (0xc002e94aa0) Stream added, broadcasting: 1 I0220 13:46:38.641064 8 log.go:172] (0xc002bb7ef0) Reply frame received for 1 I0220 13:46:38.641154 8 log.go:172] (0xc002bb7ef0) (0xc001a6a0a0) Create stream I0220 13:46:38.641178 8 log.go:172] (0xc002bb7ef0) (0xc001a6a0a0) Stream added, broadcasting: 3 I0220 13:46:38.643013 8 log.go:172] (0xc002bb7ef0) Reply frame received for 3 I0220 13:46:38.643044 8 log.go:172] (0xc002bb7ef0) (0xc0010d20a0) Create stream I0220 13:46:38.643054 8 log.go:172] (0xc002bb7ef0) (0xc0010d20a0) Stream added, broadcasting: 5 I0220 13:46:38.644334 8 log.go:172] (0xc002bb7ef0) Reply frame received for 5 I0220 13:46:38.747637 8 log.go:172] (0xc002bb7ef0) Data frame received for 3 I0220 13:46:38.747692 8 log.go:172] (0xc001a6a0a0) (3) Data frame handling I0220 13:46:38.747758 8 log.go:172] (0xc001a6a0a0) (3) Data frame sent I0220 13:46:38.865044 8 log.go:172] (0xc002bb7ef0) Data frame received for 1 I0220 13:46:38.865093 8 log.go:172] (0xc002bb7ef0) (0xc001a6a0a0) Stream removed, broadcasting: 3 I0220 13:46:38.865141 8 log.go:172] (0xc002e94aa0) (1) Data frame handling I0220 13:46:38.865166 8 log.go:172] (0xc002e94aa0) (1) Data frame sent I0220 13:46:38.865181 8 log.go:172] (0xc002bb7ef0) (0xc002e94aa0) Stream removed, broadcasting: 1 I0220 13:46:38.865668 8 log.go:172] (0xc002bb7ef0) (0xc0010d20a0) Stream removed, broadcasting: 5 I0220 13:46:38.865749 8 log.go:172] (0xc002bb7ef0) Go away received I0220 13:46:38.865844 8 log.go:172] (0xc002bb7ef0) (0xc002e94aa0) Stream removed, broadcasting: 1 I0220 13:46:38.865865 8 log.go:172] (0xc002bb7ef0) (0xc001a6a0a0) Stream removed, broadcasting: 3 I0220 13:46:38.865876 8 log.go:172] (0xc002bb7ef0) (0xc0010d20a0) Stream removed, broadcasting: 5 Feb 20 13:46:38.865: INFO: Exec stderr: "" Feb 20 13:46:38.865: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:38.865: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:38.929337 8 log.go:172] (0xc00157e6e0) (0xc001a6a320) Create stream I0220 13:46:38.929423 8 log.go:172] (0xc00157e6e0) (0xc001a6a320) Stream added, broadcasting: 1 I0220 13:46:38.953534 8 log.go:172] (0xc00157e6e0) Reply frame received for 1 I0220 
13:46:38.953583 8 log.go:172] (0xc00157e6e0) (0xc0010d2320) Create stream I0220 13:46:38.953597 8 log.go:172] (0xc00157e6e0) (0xc0010d2320) Stream added, broadcasting: 3 I0220 13:46:38.955963 8 log.go:172] (0xc00157e6e0) Reply frame received for 3 I0220 13:46:38.955999 8 log.go:172] (0xc00157e6e0) (0xc002b4a000) Create stream I0220 13:46:38.956014 8 log.go:172] (0xc00157e6e0) (0xc002b4a000) Stream added, broadcasting: 5 I0220 13:46:38.958889 8 log.go:172] (0xc00157e6e0) Reply frame received for 5 I0220 13:46:39.125542 8 log.go:172] (0xc00157e6e0) Data frame received for 3 I0220 13:46:39.125589 8 log.go:172] (0xc0010d2320) (3) Data frame handling I0220 13:46:39.125604 8 log.go:172] (0xc0010d2320) (3) Data frame sent I0220 13:46:39.267499 8 log.go:172] (0xc00157e6e0) (0xc002b4a000) Stream removed, broadcasting: 5 I0220 13:46:39.267601 8 log.go:172] (0xc00157e6e0) (0xc0010d2320) Stream removed, broadcasting: 3 I0220 13:46:39.267621 8 log.go:172] (0xc00157e6e0) Data frame received for 1 I0220 13:46:39.267627 8 log.go:172] (0xc001a6a320) (1) Data frame handling I0220 13:46:39.267642 8 log.go:172] (0xc001a6a320) (1) Data frame sent I0220 13:46:39.267648 8 log.go:172] (0xc00157e6e0) (0xc001a6a320) Stream removed, broadcasting: 1 I0220 13:46:39.267657 8 log.go:172] (0xc00157e6e0) Go away received I0220 13:46:39.267896 8 log.go:172] (0xc00157e6e0) (0xc001a6a320) Stream removed, broadcasting: 1 I0220 13:46:39.267935 8 log.go:172] (0xc00157e6e0) (0xc0010d2320) Stream removed, broadcasting: 3 I0220 13:46:39.267953 8 log.go:172] (0xc00157e6e0) (0xc002b4a000) Stream removed, broadcasting: 5 Feb 20 13:46:39.267: INFO: Exec stderr: "" Feb 20 13:46:39.268: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:39.268: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:39.309940 8 log.go:172] (0xc0003c09a0) (0xc002b4a320) Create stream I0220 13:46:39.309974 8 log.go:172] (0xc0003c09a0) (0xc002b4a320) Stream added, broadcasting: 1 I0220 13:46:39.313794 8 log.go:172] (0xc0003c09a0) Reply frame received for 1 I0220 13:46:39.313819 8 log.go:172] (0xc0003c09a0) (0xc0010d2460) Create stream I0220 13:46:39.313823 8 log.go:172] (0xc0003c09a0) (0xc0010d2460) Stream added, broadcasting: 3 I0220 13:46:39.314749 8 log.go:172] (0xc0003c09a0) Reply frame received for 3 I0220 13:46:39.314790 8 log.go:172] (0xc0003c09a0) (0xc0010d2500) Create stream I0220 13:46:39.314797 8 log.go:172] (0xc0003c09a0) (0xc0010d2500) Stream added, broadcasting: 5 I0220 13:46:39.317833 8 log.go:172] (0xc0003c09a0) Reply frame received for 5 I0220 13:46:39.397200 8 log.go:172] (0xc0003c09a0) Data frame received for 3 I0220 13:46:39.397315 8 log.go:172] (0xc0010d2460) (3) Data frame handling I0220 13:46:39.397367 8 log.go:172] (0xc0010d2460) (3) Data frame sent I0220 13:46:39.489473 8 log.go:172] (0xc0003c09a0) Data frame received for 1 I0220 13:46:39.489521 8 log.go:172] (0xc002b4a320) (1) Data frame handling I0220 13:46:39.489541 8 log.go:172] (0xc002b4a320) (1) Data frame sent I0220 13:46:39.490128 8 log.go:172] (0xc0003c09a0) (0xc002b4a320) Stream removed, broadcasting: 1 I0220 13:46:39.490998 8 log.go:172] (0xc0003c09a0) (0xc0010d2500) Stream removed, broadcasting: 5 I0220 13:46:39.491073 8 log.go:172] (0xc0003c09a0) (0xc0010d2460) Stream removed, broadcasting: 3 I0220 13:46:39.491110 8 log.go:172] (0xc0003c09a0) (0xc002b4a320) Stream removed, broadcasting: 1 I0220 
13:46:39.491117 8 log.go:172] (0xc0003c09a0) (0xc0010d2460) Stream removed, broadcasting: 3 I0220 13:46:39.491130 8 log.go:172] (0xc0003c09a0) (0xc0010d2500) Stream removed, broadcasting: 5 Feb 20 13:46:39.491: INFO: Exec stderr: "" Feb 20 13:46:39.491: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4370 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 13:46:39.491: INFO: >>> kubeConfig: /root/.kube/config I0220 13:46:39.491524 8 log.go:172] (0xc0003c09a0) Go away received I0220 13:46:39.538309 8 log.go:172] (0xc002bb7290) (0xc0010d2960) Create stream I0220 13:46:39.538333 8 log.go:172] (0xc002bb7290) (0xc0010d2960) Stream added, broadcasting: 1 I0220 13:46:39.543688 8 log.go:172] (0xc002bb7290) Reply frame received for 1 I0220 13:46:39.543762 8 log.go:172] (0xc002bb7290) (0xc001a6a3c0) Create stream I0220 13:46:39.543774 8 log.go:172] (0xc002bb7290) (0xc001a6a3c0) Stream added, broadcasting: 3 I0220 13:46:39.548714 8 log.go:172] (0xc002bb7290) Reply frame received for 3 I0220 13:46:39.548747 8 log.go:172] (0xc002bb7290) (0xc001a6a500) Create stream I0220 13:46:39.548752 8 log.go:172] (0xc002bb7290) (0xc001a6a500) Stream added, broadcasting: 5 I0220 13:46:39.550826 8 log.go:172] (0xc002bb7290) Reply frame received for 5 I0220 13:46:39.657200 8 log.go:172] (0xc002bb7290) Data frame received for 3 I0220 13:46:39.657223 8 log.go:172] (0xc001a6a3c0) (3) Data frame handling I0220 13:46:39.657236 8 log.go:172] (0xc001a6a3c0) (3) Data frame sent I0220 13:46:39.771208 8 log.go:172] (0xc002bb7290) Data frame received for 1 I0220 13:46:39.771543 8 log.go:172] (0xc002bb7290) (0xc001a6a3c0) Stream removed, broadcasting: 3 I0220 13:46:39.771737 8 log.go:172] (0xc0010d2960) (1) Data frame handling I0220 13:46:39.771762 8 log.go:172] (0xc0010d2960) (1) Data frame sent I0220 13:46:39.771814 8 log.go:172] (0xc002bb7290) (0xc001a6a500) Stream removed, broadcasting: 5 I0220 13:46:39.771890 8 log.go:172] (0xc002bb7290) (0xc0010d2960) Stream removed, broadcasting: 1 I0220 13:46:39.771919 8 log.go:172] (0xc002bb7290) Go away received I0220 13:46:39.772021 8 log.go:172] (0xc002bb7290) (0xc0010d2960) Stream removed, broadcasting: 1 I0220 13:46:39.772039 8 log.go:172] (0xc002bb7290) (0xc001a6a3c0) Stream removed, broadcasting: 3 I0220 13:46:39.772053 8 log.go:172] (0xc002bb7290) (0xc001a6a500) Stream removed, broadcasting: 5 Feb 20 13:46:39.772: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:46:39.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4370" for this suite. 
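The KubeletManagedEtcHosts spec above compares /etc/hosts across three cases: a normal pod (kubelet-managed), a container that mounts its own file over /etc/hosts (not managed), and a hostNetwork=true pod (not managed). A sketch of the two opt-out shapes; the reading of the "busybox-3" case as a mount over /etc/hosts is an assumption from the step text, and all names are illustrative.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// hostNetwork=true: the kubelet leaves /etc/hosts alone.
	hostNet := v1.PodSpec{HostNetwork: true}
	// A container mounting a volume at /etc/hosts also opts out of
	// kubelet management (assumed shape of the busybox-3 container).
	optOut := v1.Container{
		Name:         "busybox-3",
		Image:        "busybox",
		VolumeMounts: []v1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}}, // hypothetical volume name
	}
	fmt.Println(hostNet.HostNetwork, optOut.VolumeMounts[0].MountPath)
}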
Feb 20 13:47:31.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:47:31.964: INFO: namespace e2e-kubelet-etc-hosts-4370 deletion completed in 52.180646002s • [SLOW TEST:82.361 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:47:31.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-27aa6b45-91f8-4fca-a562-68a75bbe0d59 STEP: Creating a pod to test consume secrets Feb 20 13:47:32.101: INFO: Waiting up to 5m0s for pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea" in namespace "secrets-9415" to be "success or failure" Feb 20 13:47:32.106: INFO: Pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.816674ms Feb 20 13:47:34.121: INFO: Pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019603155s Feb 20 13:47:36.131: INFO: Pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029577559s Feb 20 13:47:38.144: INFO: Pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042712942s Feb 20 13:47:40.168: INFO: Pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066347974s STEP: Saw pod success Feb 20 13:47:40.168: INFO: Pod "pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea" satisfied condition "success or failure" Feb 20 13:47:40.172: INFO: Trying to get logs from node iruya-node pod pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea container secret-env-test: STEP: delete the pod Feb 20 13:47:40.232: INFO: Waiting for pod pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea to disappear Feb 20 13:47:40.242: INFO: Pod pod-secrets-9c8ccebb-38df-49e6-9b07-6630eeeb42ea no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:47:40.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9415" for this suite. 
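The Secrets spec above consumes a secret through an environment variable rather than a volume. A minimal sketch of that wiring; the env var name and the secret key are assumptions, only the pattern (SecretKeyRef inside EnvVarSource) is the point.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One secret key injected as an env var.
	e := v1.EnvVar{
		Name: "SECRET_DATA", // hypothetical
		ValueFrom: &v1.EnvVarSource{
			SecretKeyRef: &v1.SecretKeySelector{
				LocalObjectReference: v1.LocalObjectReference{Name: "secret-test"}, // illustrative secret name
				Key:                  "data-1",                                     // hypothetical key
			},
		},
	}
	fmt.Println(e.Name, "<- secret", e.ValueFrom.SecretKeyRef.Name)
}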
Feb 20 13:47:46.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:47:46.571: INFO: namespace secrets-9415 deletion completed in 6.320406266s • [SLOW TEST:14.607 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:47:46.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:48:38.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8305" for this suite. 
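The Container Runtime spec above checks RestartCount, Phase, Ready, and State for containers that exit. The names terminate-cmd-rpa, -rpof, and -rpn plausibly abbreviate RestartPolicy Always, OnFailure, and Never; that reading is an assumption, as is everything in the sketch below, which shows the OnFailure case: a failing command is restarted, so RestartCount climbs until the command exits cleanly.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpof"}, // name from the log above
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyOnFailure,
			Containers: []v1.Container{{
				Name:    "terminate-cmd-rpof",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // illustrative failing command
			}},
		},
	}
	fmt.Println(pod.Spec.RestartPolicy)
}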
Feb 20 13:48:44.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:48:45.033: INFO: namespace container-runtime-8305 deletion completed in 6.166541653s • [SLOW TEST:58.462 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:48:45.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-8659/configmap-test-6bf5d552-1e0a-40b9-ae2b-326723908696 STEP: Creating a pod to test consume configMaps Feb 20 13:48:45.129: INFO: Waiting up to 5m0s for pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529" in namespace "configmap-8659" to be "success or failure" Feb 20 13:48:45.139: INFO: Pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068965ms Feb 20 13:48:47.150: INFO: Pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020589586s Feb 20 13:48:49.160: INFO: Pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030960062s Feb 20 13:48:51.171: INFO: Pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041569115s Feb 20 13:48:53.177: INFO: Pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04812629s STEP: Saw pod success Feb 20 13:48:53.177: INFO: Pod "pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529" satisfied condition "success or failure" Feb 20 13:48:53.179: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529 container env-test: STEP: delete the pod Feb 20 13:48:53.304: INFO: Waiting for pod pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529 to disappear Feb 20 13:48:53.320: INFO: Pod pod-configmaps-fdb10a26-2d7c-4105-837b-253ed2ba6529 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:48:53.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8659" for this suite. 
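The ConfigMap spec above is the environment-variable twin of the secret test before it: a configMap key sourced into an env var via ConfigMapKeyRef. A sketch with the env var name and key as assumptions.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One configMap key injected as an env var.
	e := v1.EnvVar{
		Name: "CONFIG_DATA_1", // hypothetical
		ValueFrom: &v1.EnvVarSource{
			ConfigMapKeyRef: &v1.ConfigMapKeySelector{
				LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test"}, // illustrative name
				Key:                  "data-1",                                        // hypothetical key
			},
		},
	}
	fmt.Println(e.Name)
}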
Feb 20 13:48:59.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:48:59.642: INFO: namespace configmap-8659 deletion completed in 6.288907168s • [SLOW TEST:14.609 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:48:59.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 13:48:59.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8" in namespace "downward-api-2629" to be "success or failure" Feb 20 13:48:59.770: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8": Phase="Pending", Reason="", readiness=false. Elapsed: 46.976576ms Feb 20 13:49:01.786: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062771623s Feb 20 13:49:03.802: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079348631s Feb 20 13:49:05.814: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090770092s Feb 20 13:49:07.829: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105794474s Feb 20 13:49:09.842: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118500528s STEP: Saw pod success Feb 20 13:49:09.842: INFO: Pod "downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8" satisfied condition "success or failure" Feb 20 13:49:09.847: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8 container client-container: STEP: delete the pod Feb 20 13:49:10.049: INFO: Waiting for pod downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8 to disappear Feb 20 13:49:10.101: INFO: Pod downwardapi-volume-a1dd65bd-8ff5-4c2d-b23c-c667940750a8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:49:10.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2629" for this suite. 
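The "should set mode on item file" spec above sets a per-item file mode on a downward API volume item, distinct from the volume-wide defaultMode. A sketch of one item; the path, field, and 0400 mode are assumptions chosen for illustration.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item mode; the exact value here is assumed
	item := v1.DownwardAPIVolumeFile{
		Path:     "podname", // hypothetical file name
		FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
		Mode:     &mode,
	}
	fmt.Printf("%s mode %o\n", item.Path, *item.Mode)
}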
Feb 20 13:49:16.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:49:16.301: INFO: namespace downward-api-2629 deletion completed in 6.189915384s • [SLOW TEST:16.659 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:49:16.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Feb 20 13:49:16.385: INFO: Waiting up to 5m0s for pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241" in namespace "var-expansion-8301" to be "success or failure" Feb 20 13:49:16.455: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241": Phase="Pending", Reason="", readiness=false. Elapsed: 70.211079ms Feb 20 13:49:18.464: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078944045s Feb 20 13:49:20.476: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091403049s Feb 20 13:49:22.489: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104123566s Feb 20 13:49:25.422: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241": Phase="Pending", Reason="", readiness=false. Elapsed: 9.037234125s Feb 20 13:49:27.432: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.046658542s STEP: Saw pod success Feb 20 13:49:27.432: INFO: Pod "var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241" satisfied condition "success or failure" Feb 20 13:49:27.438: INFO: Trying to get logs from node iruya-node pod var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241 container dapi-container: STEP: delete the pod Feb 20 13:49:27.548: INFO: Waiting for pod var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241 to disappear Feb 20 13:49:27.573: INFO: Pod var-expansion-b5769657-b8c5-4e89-ae76-3420935ef241 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:49:27.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8301" for this suite. 
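The "composing env vars into new env vars" spec above builds one env var out of others. The Kubernetes rule is that $(NAME) in an env var's Value is expanded only for variables defined earlier in the same list, so ordering matters. A sketch with illustrative names and values.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// FOOBAR references FOO and BAR, which must appear before it in the list.
	env := []v1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "bar-value"},
		{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"}, // composed value; names illustrative
	}
	fmt.Println(env[2].Value)
}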
Feb 20 13:49:33.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:49:33.726: INFO: namespace var-expansion-8301 deletion completed in 6.146985575s • [SLOW TEST:17.424 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:49:33.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 20 13:49:33.872: INFO: Waiting up to 5m0s for pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d" in namespace "emptydir-26" to be "success or failure" Feb 20 13:49:33.888: INFO: Pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.772975ms Feb 20 13:49:35.897: INFO: Pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024687123s Feb 20 13:49:37.921: INFO: Pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047989255s Feb 20 13:49:39.947: INFO: Pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073960685s Feb 20 13:49:41.956: INFO: Pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083801715s STEP: Saw pod success Feb 20 13:49:41.957: INFO: Pod "pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d" satisfied condition "success or failure" Feb 20 13:49:41.960: INFO: Trying to get logs from node iruya-node pod pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d container test-container: STEP: delete the pod Feb 20 13:49:42.046: INFO: Waiting for pod pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d to disappear Feb 20 13:49:42.052: INFO: Pod pod-4612a236-6a5c-4775-9c1f-7cb855f31e9d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:49:42.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-26" for this suite. 
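The (non-root,0644,tmpfs) variant above combines three knobs: a pod-level non-root securityContext, a 0644 file mode checked inside the volume, and an emptyDir backed by memory instead of node disk. A rough equivalent (the UID, image, and paths are illustrative; emptyDir mounts are world-writable by default, which is what lets the non-root user create the file):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: v1.PodSpec{
			RestartPolicy:   v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file, force the 0644 mode under test, and print it back.
				Command:      []string{"sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium: Memory requests a tmpfs-backed emptyDir.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}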
Feb 20 13:49:48.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:49:48.165: INFO: namespace emptydir-26 deletion completed in 6.108205713s • [SLOW TEST:14.439 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:49:48.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-vhqp STEP: Creating a pod to test atomic-volume-subpath Feb 20 13:49:48.299: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vhqp" in namespace "subpath-4237" to be "success or failure" Feb 20 13:49:48.307: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235809ms Feb 20 13:49:50.362: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063615567s Feb 20 13:49:52.374: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075150339s Feb 20 13:49:54.381: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082415539s Feb 20 13:49:56.388: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 8.089699097s Feb 20 13:49:58.405: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 10.106040087s Feb 20 13:50:00.413: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 12.114547026s Feb 20 13:50:02.422: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 14.123620904s Feb 20 13:50:04.434: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 16.134997396s Feb 20 13:50:07.402: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 19.103580266s Feb 20 13:50:09.412: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 21.113244993s Feb 20 13:50:11.426: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 23.127508354s Feb 20 13:50:13.436: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. 
Elapsed: 25.137554501s Feb 20 13:50:15.444: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Running", Reason="", readiness=true. Elapsed: 27.145416866s Feb 20 13:50:17.455: INFO: Pod "pod-subpath-test-configmap-vhqp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.156446213s STEP: Saw pod success Feb 20 13:50:17.455: INFO: Pod "pod-subpath-test-configmap-vhqp" satisfied condition "success or failure" Feb 20 13:50:17.460: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-vhqp container test-container-subpath-configmap-vhqp: STEP: delete the pod Feb 20 13:50:17.536: INFO: Waiting for pod pod-subpath-test-configmap-vhqp to disappear Feb 20 13:50:18.161: INFO: Pod pod-subpath-test-configmap-vhqp no longer exists STEP: Deleting pod pod-subpath-test-configmap-vhqp Feb 20 13:50:18.161: INFO: Deleting pod "pod-subpath-test-configmap-vhqp" in namespace "subpath-4237" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:50:18.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4237" for this suite. Feb 20 13:50:24.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:50:24.378: INFO: namespace subpath-4237 deletion completed in 6.194060852s • [SLOW TEST:36.213 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:50:24.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-542fac22-0330-491f-8d10-8f9fa136166a STEP: Creating a pod to test consume configMaps Feb 20 13:50:24.558: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d" in namespace "configmap-7845" to be "success or failure" Feb 20 13:50:24.605: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.495505ms Feb 20 13:50:26.900: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342092607s Feb 20 13:50:28.908: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.349636071s Feb 20 13:50:30.923: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365193325s Feb 20 13:50:32.933: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.374441085s Feb 20 13:50:34.940: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.38242647s STEP: Saw pod success Feb 20 13:50:34.941: INFO: Pod "pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d" satisfied condition "success or failure" Feb 20 13:50:34.946: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d container configmap-volume-test: STEP: delete the pod Feb 20 13:50:35.155: INFO: Waiting for pod pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d to disappear Feb 20 13:50:35.161: INFO: Pod pod-configmaps-7ba250e8-925f-4ea6-9880-8e32ab78197d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:50:35.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7845" for this suite. Feb 20 13:50:41.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:50:41.357: INFO: namespace configmap-7845 deletion completed in 6.191499547s • [SLOW TEST:16.979 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:50:41.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 20 13:50:41.407: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:51:06.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2003" for this suite. 
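The submit-and-remove test that just finished is mostly about watch machinery: it opens a watch before creating the pod, then checks that an ADDED event arrives on submit and a DELETED event after a graceful delete. A compact client-go sketch of that observation loop (the kubeconfig path and namespace are illustrative, and the Watch signature shown is current client-go rather than the v1.15-era framework driving this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig, as the suite does with /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Open the watch before submitting any pod, mirroring the
	// "setting up watch" step in the log above.
	w, err := client.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// The test asserts it sees ADDED on creation and DELETED after a
		// graceful delete (default grace period 30s), which is why the
		// kubelet's "termination notice" appears in the trace.
		fmt.Println("event:", ev.Type)
	}
}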
Feb 20 13:51:12.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:51:12.712: INFO: namespace pods-2003 deletion completed in 6.142339563s • [SLOW TEST:31.355 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:51:12.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-5071 I0220 13:51:12.787759 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5071, replica count: 1 I0220 13:51:13.838392 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:14.838831 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:15.839146 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:16.839534 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:17.839917 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:18.840358 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:19.840812 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 13:51:20.841243 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 20 13:51:21.004: INFO: Created: latency-svc-k95gl Feb 20 13:51:21.014: INFO: Got endpoints: latency-svc-k95gl [72.730252ms] Feb 20 13:51:21.065: INFO: Created: latency-svc-htq95 Feb 20 13:51:21.137: INFO: Got endpoints: latency-svc-htq95 [122.588445ms] Feb 20 13:51:21.142: INFO: Created: latency-svc-8qcqm Feb 20 13:51:21.156: INFO: Got endpoints: latency-svc-8qcqm [141.573667ms] Feb 20 13:51:21.208: INFO: Created: latency-svc-4pksd Feb 20 13:51:21.217: INFO: Got endpoints: latency-svc-4pksd [202.515095ms] Feb 20 13:51:21.321: INFO: Created: latency-svc-q4w97 Feb 20 13:51:21.405: INFO: Got endpoints: latency-svc-q4w97 [390.402055ms] Feb 20 13:51:21.406: INFO: 
Created: latency-svc-2x4kp Feb 20 13:51:21.487: INFO: Got endpoints: latency-svc-2x4kp [472.520388ms] Feb 20 13:51:21.530: INFO: Created: latency-svc-9pbhp Feb 20 13:51:21.530: INFO: Got endpoints: latency-svc-9pbhp [515.613759ms] Feb 20 13:51:21.721: INFO: Created: latency-svc-pmg6m Feb 20 13:51:21.758: INFO: Got endpoints: latency-svc-pmg6m [743.304132ms] Feb 20 13:51:21.771: INFO: Created: latency-svc-zppf5 Feb 20 13:51:21.837: INFO: Created: latency-svc-cj9l5 Feb 20 13:51:21.844: INFO: Got endpoints: latency-svc-zppf5 [828.927593ms] Feb 20 13:51:21.845: INFO: Got endpoints: latency-svc-cj9l5 [830.771896ms] Feb 20 13:51:21.891: INFO: Created: latency-svc-jbvwh Feb 20 13:51:21.902: INFO: Got endpoints: latency-svc-jbvwh [887.41991ms] Feb 20 13:51:22.004: INFO: Created: latency-svc-x5tjz Feb 20 13:51:22.010: INFO: Got endpoints: latency-svc-x5tjz [994.676359ms] Feb 20 13:51:22.073: INFO: Created: latency-svc-sx2tq Feb 20 13:51:22.183: INFO: Got endpoints: latency-svc-sx2tq [1.168162723s] Feb 20 13:51:22.203: INFO: Created: latency-svc-wrpvs Feb 20 13:51:22.222: INFO: Got endpoints: latency-svc-wrpvs [1.207527586s] Feb 20 13:51:22.388: INFO: Created: latency-svc-r2lrn Feb 20 13:51:22.413: INFO: Got endpoints: latency-svc-r2lrn [1.397881251s] Feb 20 13:51:22.543: INFO: Created: latency-svc-mblgm Feb 20 13:51:22.606: INFO: Got endpoints: latency-svc-mblgm [1.591309599s] Feb 20 13:51:22.607: INFO: Created: latency-svc-4llcc Feb 20 13:51:22.613: INFO: Got endpoints: latency-svc-4llcc [1.475645392s] Feb 20 13:51:22.705: INFO: Created: latency-svc-g9ph8 Feb 20 13:51:22.736: INFO: Got endpoints: latency-svc-g9ph8 [1.580436746s] Feb 20 13:51:22.792: INFO: Created: latency-svc-5hntd Feb 20 13:51:22.794: INFO: Got endpoints: latency-svc-5hntd [1.577013751s] Feb 20 13:51:22.863: INFO: Created: latency-svc-bccsz Feb 20 13:51:22.926: INFO: Got endpoints: latency-svc-bccsz [1.521459995s] Feb 20 13:51:22.939: INFO: Created: latency-svc-h7nld Feb 20 13:51:23.060: INFO: Got endpoints: latency-svc-h7nld [1.573099316s] Feb 20 13:51:23.111: INFO: Created: latency-svc-qcqfc Feb 20 13:51:23.111: INFO: Created: latency-svc-wtscj Feb 20 13:51:23.128: INFO: Got endpoints: latency-svc-wtscj [1.37038089s] Feb 20 13:51:23.206: INFO: Got endpoints: latency-svc-qcqfc [1.675505737s] Feb 20 13:51:23.234: INFO: Created: latency-svc-vgbgk Feb 20 13:51:23.241: INFO: Got endpoints: latency-svc-vgbgk [1.395696019s] Feb 20 13:51:23.287: INFO: Created: latency-svc-ws8v4 Feb 20 13:51:23.454: INFO: Got endpoints: latency-svc-ws8v4 [247.79304ms] Feb 20 13:51:23.529: INFO: Created: latency-svc-k7kvc Feb 20 13:51:23.633: INFO: Got endpoints: latency-svc-k7kvc [1.789233593s] Feb 20 13:51:23.654: INFO: Created: latency-svc-jqj2f Feb 20 13:51:23.658: INFO: Got endpoints: latency-svc-jqj2f [1.755128949s] Feb 20 13:51:23.907: INFO: Created: latency-svc-txmgh Feb 20 13:51:23.931: INFO: Got endpoints: latency-svc-txmgh [1.92083655s] Feb 20 13:51:24.031: INFO: Created: latency-svc-wwf7h Feb 20 13:51:24.048: INFO: Got endpoints: latency-svc-wwf7h [1.864684598s] Feb 20 13:51:24.105: INFO: Created: latency-svc-lkw29 Feb 20 13:51:24.125: INFO: Got endpoints: latency-svc-lkw29 [1.902910315s] Feb 20 13:51:24.239: INFO: Created: latency-svc-sdf77 Feb 20 13:51:24.255: INFO: Got endpoints: latency-svc-sdf77 [1.841540219s] Feb 20 13:51:24.304: INFO: Created: latency-svc-c9tzj Feb 20 13:51:24.311: INFO: Got endpoints: latency-svc-c9tzj [1.704231837s] Feb 20 13:51:24.504: INFO: Created: latency-svc-rwk6z Feb 20 13:51:24.507: INFO: Got endpoints: 
latency-svc-rwk6z [1.894097523s] Feb 20 13:51:24.662: INFO: Created: latency-svc-cjkgq Feb 20 13:51:24.670: INFO: Got endpoints: latency-svc-cjkgq [1.933415935s] Feb 20 13:51:24.708: INFO: Created: latency-svc-th9w5 Feb 20 13:51:24.726: INFO: Got endpoints: latency-svc-th9w5 [1.932529288s] Feb 20 13:51:24.916: INFO: Created: latency-svc-rwkcl Feb 20 13:51:24.936: INFO: Got endpoints: latency-svc-rwkcl [2.009474943s] Feb 20 13:51:25.147: INFO: Created: latency-svc-vf9xw Feb 20 13:51:25.167: INFO: Got endpoints: latency-svc-vf9xw [2.106306408s] Feb 20 13:51:25.338: INFO: Created: latency-svc-2dhn7 Feb 20 13:51:25.344: INFO: Got endpoints: latency-svc-2dhn7 [2.215513031s] Feb 20 13:51:25.629: INFO: Created: latency-svc-q626v Feb 20 13:51:25.878: INFO: Got endpoints: latency-svc-q626v [2.636622791s] Feb 20 13:51:25.884: INFO: Created: latency-svc-258qw Feb 20 13:51:25.892: INFO: Got endpoints: latency-svc-258qw [2.438448044s] Feb 20 13:51:26.111: INFO: Created: latency-svc-g5v5x Feb 20 13:51:26.123: INFO: Got endpoints: latency-svc-g5v5x [2.489162782s] Feb 20 13:51:26.345: INFO: Created: latency-svc-x9w5k Feb 20 13:51:26.352: INFO: Got endpoints: latency-svc-x9w5k [2.693933538s] Feb 20 13:51:26.432: INFO: Created: latency-svc-b9lk2 Feb 20 13:51:26.548: INFO: Got endpoints: latency-svc-b9lk2 [2.617392644s] Feb 20 13:51:26.590: INFO: Created: latency-svc-5twxq Feb 20 13:51:26.610: INFO: Got endpoints: latency-svc-5twxq [2.561093984s] Feb 20 13:51:26.725: INFO: Created: latency-svc-9pqzm Feb 20 13:51:26.731: INFO: Got endpoints: latency-svc-9pqzm [2.605168184s] Feb 20 13:51:26.787: INFO: Created: latency-svc-pz9jx Feb 20 13:51:26.799: INFO: Got endpoints: latency-svc-pz9jx [2.54405688s] Feb 20 13:51:26.909: INFO: Created: latency-svc-9gpsr Feb 20 13:51:26.910: INFO: Got endpoints: latency-svc-9gpsr [2.598747544s] Feb 20 13:51:26.967: INFO: Created: latency-svc-xb262 Feb 20 13:51:26.976: INFO: Got endpoints: latency-svc-xb262 [2.469452761s] Feb 20 13:51:27.082: INFO: Created: latency-svc-h6rxl Feb 20 13:51:27.087: INFO: Got endpoints: latency-svc-h6rxl [2.416721632s] Feb 20 13:51:27.128: INFO: Created: latency-svc-vrqnr Feb 20 13:51:27.138: INFO: Got endpoints: latency-svc-vrqnr [2.411373174s] Feb 20 13:51:27.185: INFO: Created: latency-svc-vg2q4 Feb 20 13:51:27.253: INFO: Got endpoints: latency-svc-vg2q4 [2.317090993s] Feb 20 13:51:27.289: INFO: Created: latency-svc-7t56j Feb 20 13:51:27.295: INFO: Got endpoints: latency-svc-7t56j [2.127910839s] Feb 20 13:51:27.485: INFO: Created: latency-svc-9rk7f Feb 20 13:51:27.495: INFO: Got endpoints: latency-svc-9rk7f [2.150765597s] Feb 20 13:51:27.540: INFO: Created: latency-svc-bgldb Feb 20 13:51:27.548: INFO: Got endpoints: latency-svc-bgldb [1.670135126s] Feb 20 13:51:27.714: INFO: Created: latency-svc-stl86 Feb 20 13:51:27.728: INFO: Got endpoints: latency-svc-stl86 [1.835876728s] Feb 20 13:51:27.841: INFO: Created: latency-svc-4wjx9 Feb 20 13:51:27.854: INFO: Got endpoints: latency-svc-4wjx9 [1.730774836s] Feb 20 13:51:27.925: INFO: Created: latency-svc-gwfkw Feb 20 13:51:28.057: INFO: Got endpoints: latency-svc-gwfkw [1.705145017s] Feb 20 13:51:28.067: INFO: Created: latency-svc-5l76k Feb 20 13:51:28.070: INFO: Got endpoints: latency-svc-5l76k [1.521059223s] Feb 20 13:51:28.196: INFO: Created: latency-svc-ngfg6 Feb 20 13:51:28.206: INFO: Got endpoints: latency-svc-ngfg6 [1.595677205s] Feb 20 13:51:28.263: INFO: Created: latency-svc-5zkwf Feb 20 13:51:28.376: INFO: Got endpoints: latency-svc-5zkwf [1.645516867s] Feb 20 13:51:28.390: INFO: Created: 
latency-svc-sqt5k Feb 20 13:51:28.392: INFO: Got endpoints: latency-svc-sqt5k [1.59259124s] Feb 20 13:51:28.573: INFO: Created: latency-svc-lp8j5 Feb 20 13:51:28.579: INFO: Got endpoints: latency-svc-lp8j5 [1.669275746s] Feb 20 13:51:28.731: INFO: Created: latency-svc-hwg89 Feb 20 13:51:28.774: INFO: Got endpoints: latency-svc-hwg89 [1.797767206s] Feb 20 13:51:28.780: INFO: Created: latency-svc-rwmwl Feb 20 13:51:28.901: INFO: Got endpoints: latency-svc-rwmwl [1.813891901s] Feb 20 13:51:28.911: INFO: Created: latency-svc-p5s42 Feb 20 13:51:28.930: INFO: Got endpoints: latency-svc-p5s42 [1.792336578s] Feb 20 13:51:28.968: INFO: Created: latency-svc-5t7kd Feb 20 13:51:28.988: INFO: Got endpoints: latency-svc-5t7kd [1.734842316s] Feb 20 13:51:29.067: INFO: Created: latency-svc-29r59 Feb 20 13:51:29.080: INFO: Got endpoints: latency-svc-29r59 [1.784648306s] Feb 20 13:51:29.121: INFO: Created: latency-svc-ll4zr Feb 20 13:51:29.211: INFO: Got endpoints: latency-svc-ll4zr [1.716432603s] Feb 20 13:51:29.245: INFO: Created: latency-svc-cz7m6 Feb 20 13:51:29.251: INFO: Got endpoints: latency-svc-cz7m6 [1.702344918s] Feb 20 13:51:29.315: INFO: Created: latency-svc-6nx8g Feb 20 13:51:29.385: INFO: Got endpoints: latency-svc-6nx8g [1.656270347s] Feb 20 13:51:29.474: INFO: Created: latency-svc-jgzvt Feb 20 13:51:29.597: INFO: Got endpoints: latency-svc-jgzvt [1.74281343s] Feb 20 13:51:29.693: INFO: Created: latency-svc-sssps Feb 20 13:51:29.823: INFO: Got endpoints: latency-svc-sssps [1.765779712s] Feb 20 13:51:29.884: INFO: Created: latency-svc-kx2vg Feb 20 13:51:29.895: INFO: Got endpoints: latency-svc-kx2vg [1.825696663s] Feb 20 13:51:30.029: INFO: Created: latency-svc-rmt8g Feb 20 13:51:30.079: INFO: Got endpoints: latency-svc-rmt8g [1.872889026s] Feb 20 13:51:30.084: INFO: Created: latency-svc-p5w54 Feb 20 13:51:30.091: INFO: Got endpoints: latency-svc-p5w54 [1.714250569s] Feb 20 13:51:30.202: INFO: Created: latency-svc-24wbn Feb 20 13:51:30.209: INFO: Got endpoints: latency-svc-24wbn [1.817615694s] Feb 20 13:51:30.280: INFO: Created: latency-svc-sqqn8 Feb 20 13:51:30.288: INFO: Got endpoints: latency-svc-sqqn8 [1.708517423s] Feb 20 13:51:30.439: INFO: Created: latency-svc-xtq8k Feb 20 13:51:30.439: INFO: Got endpoints: latency-svc-xtq8k [1.665133573s] Feb 20 13:51:30.509: INFO: Created: latency-svc-5dxhn Feb 20 13:51:30.685: INFO: Got endpoints: latency-svc-5dxhn [1.784272562s] Feb 20 13:51:30.689: INFO: Created: latency-svc-l8bvz Feb 20 13:51:30.693: INFO: Got endpoints: latency-svc-l8bvz [1.762534806s] Feb 20 13:51:30.833: INFO: Created: latency-svc-w5pkl Feb 20 13:51:30.845: INFO: Got endpoints: latency-svc-w5pkl [1.856616111s] Feb 20 13:51:30.966: INFO: Created: latency-svc-nncg9 Feb 20 13:51:30.967: INFO: Got endpoints: latency-svc-nncg9 [1.88771555s] Feb 20 13:51:31.017: INFO: Created: latency-svc-4k5xp Feb 20 13:51:31.025: INFO: Got endpoints: latency-svc-4k5xp [1.813367999s] Feb 20 13:51:31.128: INFO: Created: latency-svc-xlqhp Feb 20 13:51:31.148: INFO: Got endpoints: latency-svc-xlqhp [1.8974075s] Feb 20 13:51:31.185: INFO: Created: latency-svc-w2q8s Feb 20 13:51:31.194: INFO: Got endpoints: latency-svc-w2q8s [1.808576654s] Feb 20 13:51:31.285: INFO: Created: latency-svc-49nfh Feb 20 13:51:31.311: INFO: Got endpoints: latency-svc-49nfh [1.713560248s] Feb 20 13:51:31.379: INFO: Created: latency-svc-hrdmz Feb 20 13:51:31.454: INFO: Got endpoints: latency-svc-hrdmz [1.631162497s] Feb 20 13:51:31.486: INFO: Created: latency-svc-4rvph Feb 20 13:51:31.497: INFO: Got endpoints: 
latency-svc-4rvph [1.601475536s] Feb 20 13:51:31.678: INFO: Created: latency-svc-hg9pw Feb 20 13:51:31.686: INFO: Got endpoints: latency-svc-hg9pw [1.607519827s] Feb 20 13:51:31.921: INFO: Created: latency-svc-tbx8w Feb 20 13:51:31.932: INFO: Got endpoints: latency-svc-tbx8w [1.841539538s] Feb 20 13:51:31.993: INFO: Created: latency-svc-hvhsl Feb 20 13:51:32.000: INFO: Got endpoints: latency-svc-hvhsl [1.79095208s] Feb 20 13:51:32.110: INFO: Created: latency-svc-rp28p Feb 20 13:51:32.152: INFO: Got endpoints: latency-svc-rp28p [1.864127875s] Feb 20 13:51:32.160: INFO: Created: latency-svc-nch6l Feb 20 13:51:32.164: INFO: Got endpoints: latency-svc-nch6l [1.724362572s] Feb 20 13:51:32.289: INFO: Created: latency-svc-jqkrq Feb 20 13:51:32.296: INFO: Got endpoints: latency-svc-jqkrq [1.611075869s] Feb 20 13:51:32.392: INFO: Created: latency-svc-kbv77 Feb 20 13:51:32.478: INFO: Got endpoints: latency-svc-kbv77 [1.78478199s] Feb 20 13:51:32.491: INFO: Created: latency-svc-s9hcg Feb 20 13:51:32.504: INFO: Got endpoints: latency-svc-s9hcg [1.658958832s] Feb 20 13:51:32.583: INFO: Created: latency-svc-8x8nw Feb 20 13:51:32.717: INFO: Got endpoints: latency-svc-8x8nw [1.748943155s] Feb 20 13:51:32.766: INFO: Created: latency-svc-9pkmt Feb 20 13:51:32.767: INFO: Got endpoints: latency-svc-9pkmt [1.742220998s] Feb 20 13:51:32.911: INFO: Created: latency-svc-444xd Feb 20 13:51:32.923: INFO: Got endpoints: latency-svc-444xd [1.774623597s] Feb 20 13:51:32.974: INFO: Created: latency-svc-bgrr9 Feb 20 13:51:32.979: INFO: Got endpoints: latency-svc-bgrr9 [1.78527673s] Feb 20 13:51:33.108: INFO: Created: latency-svc-dgvx8 Feb 20 13:51:33.110: INFO: Got endpoints: latency-svc-dgvx8 [1.79886658s] Feb 20 13:51:33.194: INFO: Created: latency-svc-4bnrn Feb 20 13:51:33.302: INFO: Got endpoints: latency-svc-4bnrn [1.847129233s] Feb 20 13:51:33.382: INFO: Created: latency-svc-c4jkl Feb 20 13:51:33.383: INFO: Got endpoints: latency-svc-c4jkl [1.88564483s] Feb 20 13:51:33.513: INFO: Created: latency-svc-725lb Feb 20 13:51:33.530: INFO: Got endpoints: latency-svc-725lb [1.843076688s] Feb 20 13:51:33.594: INFO: Created: latency-svc-2h4w9 Feb 20 13:51:33.609: INFO: Got endpoints: latency-svc-2h4w9 [1.676132644s] Feb 20 13:51:33.718: INFO: Created: latency-svc-qp7vk Feb 20 13:51:33.718: INFO: Got endpoints: latency-svc-qp7vk [1.717331326s] Feb 20 13:51:33.880: INFO: Created: latency-svc-lsd65 Feb 20 13:51:33.885: INFO: Got endpoints: latency-svc-lsd65 [1.73254121s] Feb 20 13:51:33.945: INFO: Created: latency-svc-pj2rz Feb 20 13:51:33.954: INFO: Got endpoints: latency-svc-pj2rz [1.789792374s] Feb 20 13:51:34.096: INFO: Created: latency-svc-wzrp5 Feb 20 13:51:34.144: INFO: Got endpoints: latency-svc-wzrp5 [1.847521883s] Feb 20 13:51:34.161: INFO: Created: latency-svc-sflf5 Feb 20 13:51:34.302: INFO: Got endpoints: latency-svc-sflf5 [1.824149229s] Feb 20 13:51:34.334: INFO: Created: latency-svc-svm7g Feb 20 13:51:34.340: INFO: Got endpoints: latency-svc-svm7g [1.835801054s] Feb 20 13:51:34.498: INFO: Created: latency-svc-9ht5r Feb 20 13:51:34.568: INFO: Got endpoints: latency-svc-9ht5r [1.851000572s] Feb 20 13:51:34.569: INFO: Created: latency-svc-99pwp Feb 20 13:51:34.582: INFO: Got endpoints: latency-svc-99pwp [1.814763565s] Feb 20 13:51:34.815: INFO: Created: latency-svc-z9cx5 Feb 20 13:51:34.867: INFO: Got endpoints: latency-svc-z9cx5 [1.943840413s] Feb 20 13:51:34.886: INFO: Created: latency-svc-t4qnz Feb 20 13:51:34.985: INFO: Got endpoints: latency-svc-t4qnz [2.005468422s] Feb 20 13:51:35.065: INFO: Created: 
latency-svc-kj26s Feb 20 13:51:35.071: INFO: Got endpoints: latency-svc-kj26s [1.961109213s] Feb 20 13:51:35.175: INFO: Created: latency-svc-btjvj Feb 20 13:51:35.188: INFO: Got endpoints: latency-svc-btjvj [1.8856172s] Feb 20 13:51:35.235: INFO: Created: latency-svc-qlkgp Feb 20 13:51:35.248: INFO: Got endpoints: latency-svc-qlkgp [1.864849285s] Feb 20 13:51:35.361: INFO: Created: latency-svc-wb46g Feb 20 13:51:35.386: INFO: Got endpoints: latency-svc-wb46g [1.856239643s] Feb 20 13:51:35.526: INFO: Created: latency-svc-47s8z Feb 20 13:51:35.550: INFO: Got endpoints: latency-svc-47s8z [1.940676455s] Feb 20 13:51:35.626: INFO: Created: latency-svc-qx82d Feb 20 13:51:35.843: INFO: Got endpoints: latency-svc-qx82d [2.124802167s] Feb 20 13:51:35.898: INFO: Created: latency-svc-5knm6 Feb 20 13:51:35.919: INFO: Got endpoints: latency-svc-5knm6 [2.034242704s] Feb 20 13:51:36.206: INFO: Created: latency-svc-8njz8 Feb 20 13:51:36.221: INFO: Got endpoints: latency-svc-8njz8 [2.266744034s] Feb 20 13:51:36.276: INFO: Created: latency-svc-q27sl Feb 20 13:51:36.415: INFO: Got endpoints: latency-svc-q27sl [2.269894856s] Feb 20 13:51:36.437: INFO: Created: latency-svc-9jlw7 Feb 20 13:51:36.456: INFO: Got endpoints: latency-svc-9jlw7 [2.152698121s] Feb 20 13:51:36.511: INFO: Created: latency-svc-s6gtr Feb 20 13:51:36.630: INFO: Got endpoints: latency-svc-s6gtr [2.290041485s] Feb 20 13:51:36.636: INFO: Created: latency-svc-xtgsd Feb 20 13:51:36.648: INFO: Got endpoints: latency-svc-xtgsd [2.08028286s] Feb 20 13:51:36.911: INFO: Created: latency-svc-m9pgg Feb 20 13:51:36.911: INFO: Got endpoints: latency-svc-m9pgg [2.329125296s] Feb 20 13:51:37.083: INFO: Created: latency-svc-25942 Feb 20 13:51:37.102: INFO: Got endpoints: latency-svc-25942 [2.234761362s] Feb 20 13:51:37.156: INFO: Created: latency-svc-8dhg6 Feb 20 13:51:37.175: INFO: Got endpoints: latency-svc-8dhg6 [2.189855506s] Feb 20 13:51:37.317: INFO: Created: latency-svc-knffg Feb 20 13:51:37.339: INFO: Got endpoints: latency-svc-knffg [2.268218537s] Feb 20 13:51:37.506: INFO: Created: latency-svc-k58wf Feb 20 13:51:37.516: INFO: Got endpoints: latency-svc-k58wf [2.328332022s] Feb 20 13:51:37.574: INFO: Created: latency-svc-bp2m6 Feb 20 13:51:37.594: INFO: Got endpoints: latency-svc-bp2m6 [2.346055897s] Feb 20 13:51:37.750: INFO: Created: latency-svc-jphfg Feb 20 13:51:37.756: INFO: Got endpoints: latency-svc-jphfg [2.370027319s] Feb 20 13:51:37.819: INFO: Created: latency-svc-jgvmk Feb 20 13:51:37.826: INFO: Got endpoints: latency-svc-jgvmk [2.275847068s] Feb 20 13:51:37.920: INFO: Created: latency-svc-8z67z Feb 20 13:51:37.925: INFO: Got endpoints: latency-svc-8z67z [2.082241886s] Feb 20 13:51:37.989: INFO: Created: latency-svc-wzf7f Feb 20 13:51:38.062: INFO: Got endpoints: latency-svc-wzf7f [2.142532644s] Feb 20 13:51:38.081: INFO: Created: latency-svc-ns7k8 Feb 20 13:51:38.091: INFO: Got endpoints: latency-svc-ns7k8 [1.870239804s] Feb 20 13:51:38.131: INFO: Created: latency-svc-krckm Feb 20 13:51:38.147: INFO: Got endpoints: latency-svc-krckm [1.731847228s] Feb 20 13:51:38.264: INFO: Created: latency-svc-vncvw Feb 20 13:51:38.265: INFO: Got endpoints: latency-svc-vncvw [1.809529439s] Feb 20 13:51:38.331: INFO: Created: latency-svc-8248h Feb 20 13:51:38.347: INFO: Got endpoints: latency-svc-8248h [1.716670318s] Feb 20 13:51:38.554: INFO: Created: latency-svc-zb2bp Feb 20 13:51:38.556: INFO: Got endpoints: latency-svc-zb2bp [1.907890773s] Feb 20 13:51:38.621: INFO: Created: latency-svc-ngcpn Feb 20 13:51:38.630: INFO: Got endpoints: 
latency-svc-ngcpn [1.718358501s] Feb 20 13:51:38.746: INFO: Created: latency-svc-7fpfk Feb 20 13:51:38.763: INFO: Got endpoints: latency-svc-7fpfk [1.660513278s] Feb 20 13:51:38.810: INFO: Created: latency-svc-h5frs Feb 20 13:51:38.928: INFO: Got endpoints: latency-svc-h5frs [1.752728049s] Feb 20 13:51:38.933: INFO: Created: latency-svc-g29ld Feb 20 13:51:38.941: INFO: Got endpoints: latency-svc-g29ld [1.601703983s] Feb 20 13:51:39.184: INFO: Created: latency-svc-9qcb5 Feb 20 13:51:39.184: INFO: Got endpoints: latency-svc-9qcb5 [1.667221423s] Feb 20 13:51:39.391: INFO: Created: latency-svc-fzgnk Feb 20 13:51:39.397: INFO: Got endpoints: latency-svc-fzgnk [1.802219469s] Feb 20 13:51:39.577: INFO: Created: latency-svc-4b9bk Feb 20 13:51:39.764: INFO: Created: latency-svc-q8sq5 Feb 20 13:51:39.779: INFO: Got endpoints: latency-svc-4b9bk [2.022683571s] Feb 20 13:51:39.946: INFO: Created: latency-svc-fblnz Feb 20 13:51:39.973: INFO: Got endpoints: latency-svc-q8sq5 [2.146887442s] Feb 20 13:51:40.111: INFO: Got endpoints: latency-svc-fblnz [2.185433548s] Feb 20 13:51:40.113: INFO: Created: latency-svc-g9xk6 Feb 20 13:51:40.122: INFO: Got endpoints: latency-svc-g9xk6 [2.060290693s] Feb 20 13:51:40.363: INFO: Created: latency-svc-wl6zd Feb 20 13:51:40.367: INFO: Got endpoints: latency-svc-wl6zd [2.276433563s] Feb 20 13:51:40.458: INFO: Created: latency-svc-kkqkx Feb 20 13:51:40.502: INFO: Got endpoints: latency-svc-kkqkx [2.355266476s] Feb 20 13:51:40.549: INFO: Created: latency-svc-bsddn Feb 20 13:51:40.586: INFO: Got endpoints: latency-svc-bsddn [2.320460226s] Feb 20 13:51:40.591: INFO: Created: latency-svc-q9lph Feb 20 13:51:40.643: INFO: Got endpoints: latency-svc-q9lph [2.296170704s] Feb 20 13:51:40.685: INFO: Created: latency-svc-m8mv9 Feb 20 13:51:40.689: INFO: Got endpoints: latency-svc-m8mv9 [2.132511077s] Feb 20 13:51:40.854: INFO: Created: latency-svc-b5csp Feb 20 13:51:40.901: INFO: Got endpoints: latency-svc-b5csp [2.270839388s] Feb 20 13:51:40.950: INFO: Created: latency-svc-4gthd Feb 20 13:51:41.010: INFO: Got endpoints: latency-svc-4gthd [2.246797925s] Feb 20 13:51:41.067: INFO: Created: latency-svc-p6j4h Feb 20 13:51:41.078: INFO: Got endpoints: latency-svc-p6j4h [2.149658979s] Feb 20 13:51:41.212: INFO: Created: latency-svc-ggrvk Feb 20 13:51:41.218: INFO: Got endpoints: latency-svc-ggrvk [2.276689114s] Feb 20 13:51:41.273: INFO: Created: latency-svc-x46qk Feb 20 13:51:41.404: INFO: Got endpoints: latency-svc-x46qk [2.220309086s] Feb 20 13:51:41.450: INFO: Created: latency-svc-c6tcv Feb 20 13:51:41.458: INFO: Got endpoints: latency-svc-c6tcv [2.061625746s] Feb 20 13:51:41.502: INFO: Created: latency-svc-2dww7 Feb 20 13:51:41.721: INFO: Got endpoints: latency-svc-2dww7 [1.941762426s] Feb 20 13:51:41.732: INFO: Created: latency-svc-bkjhv Feb 20 13:51:41.883: INFO: Got endpoints: latency-svc-bkjhv [1.910129314s] Feb 20 13:51:41.970: INFO: Created: latency-svc-b6rrn Feb 20 13:51:42.084: INFO: Created: latency-svc-7zk7n Feb 20 13:51:42.084: INFO: Got endpoints: latency-svc-b6rrn [1.973344044s] Feb 20 13:51:42.089: INFO: Got endpoints: latency-svc-7zk7n [1.966965688s] Feb 20 13:51:42.213: INFO: Created: latency-svc-hkkfg Feb 20 13:51:42.226: INFO: Got endpoints: latency-svc-hkkfg [1.85880566s] Feb 20 13:51:42.279: INFO: Created: latency-svc-nbhn2 Feb 20 13:51:42.395: INFO: Created: latency-svc-bt4xw Feb 20 13:51:42.395: INFO: Got endpoints: latency-svc-nbhn2 [1.892582071s] Feb 20 13:51:42.439: INFO: Got endpoints: latency-svc-bt4xw [1.852417309s] Feb 20 13:51:42.462: INFO: Created: 
latency-svc-ms7wz Feb 20 13:51:42.480: INFO: Got endpoints: latency-svc-ms7wz [1.836618217s] Feb 20 13:51:42.596: INFO: Created: latency-svc-567bs Feb 20 13:51:42.607: INFO: Got endpoints: latency-svc-567bs [1.917908472s] Feb 20 13:51:42.646: INFO: Created: latency-svc-kd6t9 Feb 20 13:51:42.738: INFO: Created: latency-svc-8m27k Feb 20 13:51:42.742: INFO: Got endpoints: latency-svc-kd6t9 [1.841451607s] Feb 20 13:51:42.885: INFO: Got endpoints: latency-svc-8m27k [1.874904821s] Feb 20 13:51:42.890: INFO: Created: latency-svc-629ss Feb 20 13:51:42.895: INFO: Got endpoints: latency-svc-629ss [1.817071934s] Feb 20 13:51:42.980: INFO: Created: latency-svc-s6bpt Feb 20 13:51:43.080: INFO: Got endpoints: latency-svc-s6bpt [1.86121942s] Feb 20 13:51:43.100: INFO: Created: latency-svc-6vztg Feb 20 13:51:43.126: INFO: Got endpoints: latency-svc-6vztg [1.7213048s] Feb 20 13:51:43.181: INFO: Created: latency-svc-f7qnp Feb 20 13:51:43.242: INFO: Got endpoints: latency-svc-f7qnp [1.783275566s] Feb 20 13:51:43.271: INFO: Created: latency-svc-7fcxt Feb 20 13:51:43.285: INFO: Got endpoints: latency-svc-7fcxt [1.56440712s] Feb 20 13:51:43.347: INFO: Created: latency-svc-jszhd Feb 20 13:51:43.443: INFO: Got endpoints: latency-svc-jszhd [1.559722711s] Feb 20 13:51:43.443: INFO: Created: latency-svc-6vrg4 Feb 20 13:51:43.453: INFO: Got endpoints: latency-svc-6vrg4 [1.368424856s] Feb 20 13:51:43.504: INFO: Created: latency-svc-5lfxw Feb 20 13:51:43.510: INFO: Got endpoints: latency-svc-5lfxw [1.420997515s] Feb 20 13:51:43.609: INFO: Created: latency-svc-4mwrx Feb 20 13:51:43.639: INFO: Got endpoints: latency-svc-4mwrx [1.412378514s] Feb 20 13:51:43.643: INFO: Created: latency-svc-nx777 Feb 20 13:51:43.663: INFO: Got endpoints: latency-svc-nx777 [1.267924372s] Feb 20 13:51:43.767: INFO: Created: latency-svc-x55jg Feb 20 13:51:43.799: INFO: Got endpoints: latency-svc-x55jg [1.359767425s] Feb 20 13:51:43.939: INFO: Created: latency-svc-h2vs9 Feb 20 13:51:43.939: INFO: Got endpoints: latency-svc-h2vs9 [1.458763736s] Feb 20 13:51:43.988: INFO: Created: latency-svc-5rj42 Feb 20 13:51:44.099: INFO: Got endpoints: latency-svc-5rj42 [1.492144241s] Feb 20 13:51:44.122: INFO: Created: latency-svc-4cl97 Feb 20 13:51:44.124: INFO: Got endpoints: latency-svc-4cl97 [1.381278702s] Feb 20 13:51:44.182: INFO: Created: latency-svc-x5ptr Feb 20 13:51:44.182: INFO: Got endpoints: latency-svc-x5ptr [1.296000372s] Feb 20 13:51:44.288: INFO: Created: latency-svc-m76k8 Feb 20 13:51:44.296: INFO: Got endpoints: latency-svc-m76k8 [1.401381964s] Feb 20 13:51:44.499: INFO: Created: latency-svc-bkhc5 Feb 20 13:51:44.549: INFO: Created: latency-svc-8bpcs Feb 20 13:51:44.551: INFO: Got endpoints: latency-svc-bkhc5 [1.470730142s] Feb 20 13:51:44.557: INFO: Got endpoints: latency-svc-8bpcs [1.431718558s] Feb 20 13:51:44.652: INFO: Created: latency-svc-74lp5 Feb 20 13:51:44.659: INFO: Got endpoints: latency-svc-74lp5 [1.417492592s] Feb 20 13:51:44.710: INFO: Created: latency-svc-qk8vq Feb 20 13:51:44.712: INFO: Got endpoints: latency-svc-qk8vq [1.426607443s] Feb 20 13:51:44.911: INFO: Created: latency-svc-m4gtj Feb 20 13:51:44.959: INFO: Got endpoints: latency-svc-m4gtj [1.515827014s] Feb 20 13:51:45.167: INFO: Created: latency-svc-wcn48 Feb 20 13:51:45.265: INFO: Got endpoints: latency-svc-wcn48 [1.812520871s] Feb 20 13:51:45.278: INFO: Created: latency-svc-fx6jv Feb 20 13:51:45.284: INFO: Got endpoints: latency-svc-fx6jv [1.773597113s] Feb 20 13:51:45.340: INFO: Created: latency-svc-hqb5s Feb 20 13:51:45.358: INFO: Got endpoints: 
latency-svc-hqb5s [1.71925472s] Feb 20 13:51:45.465: INFO: Created: latency-svc-pqxfr Feb 20 13:51:45.466: INFO: Got endpoints: latency-svc-pqxfr [1.802269882s] Feb 20 13:51:45.505: INFO: Created: latency-svc-kdc6p Feb 20 13:51:45.511: INFO: Got endpoints: latency-svc-kdc6p [1.711589017s] Feb 20 13:51:45.595: INFO: Created: latency-svc-fgws5 Feb 20 13:51:45.603: INFO: Got endpoints: latency-svc-fgws5 [1.663781159s] Feb 20 13:51:45.603: INFO: Latencies: [122.588445ms 141.573667ms 202.515095ms 247.79304ms 390.402055ms 472.520388ms 515.613759ms 743.304132ms 828.927593ms 830.771896ms 887.41991ms 994.676359ms 1.168162723s 1.207527586s 1.267924372s 1.296000372s 1.359767425s 1.368424856s 1.37038089s 1.381278702s 1.395696019s 1.397881251s 1.401381964s 1.412378514s 1.417492592s 1.420997515s 1.426607443s 1.431718558s 1.458763736s 1.470730142s 1.475645392s 1.492144241s 1.515827014s 1.521059223s 1.521459995s 1.559722711s 1.56440712s 1.573099316s 1.577013751s 1.580436746s 1.591309599s 1.59259124s 1.595677205s 1.601475536s 1.601703983s 1.607519827s 1.611075869s 1.631162497s 1.645516867s 1.656270347s 1.658958832s 1.660513278s 1.663781159s 1.665133573s 1.667221423s 1.669275746s 1.670135126s 1.675505737s 1.676132644s 1.702344918s 1.704231837s 1.705145017s 1.708517423s 1.711589017s 1.713560248s 1.714250569s 1.716432603s 1.716670318s 1.717331326s 1.718358501s 1.71925472s 1.7213048s 1.724362572s 1.730774836s 1.731847228s 1.73254121s 1.734842316s 1.742220998s 1.74281343s 1.748943155s 1.752728049s 1.755128949s 1.762534806s 1.765779712s 1.773597113s 1.774623597s 1.783275566s 1.784272562s 1.784648306s 1.78478199s 1.78527673s 1.789233593s 1.789792374s 1.79095208s 1.792336578s 1.797767206s 1.79886658s 1.802219469s 1.802269882s 1.808576654s 1.809529439s 1.812520871s 1.813367999s 1.813891901s 1.814763565s 1.817071934s 1.817615694s 1.824149229s 1.825696663s 1.835801054s 1.835876728s 1.836618217s 1.841451607s 1.841539538s 1.841540219s 1.843076688s 1.847129233s 1.847521883s 1.851000572s 1.852417309s 1.856239643s 1.856616111s 1.85880566s 1.86121942s 1.864127875s 1.864684598s 1.864849285s 1.870239804s 1.872889026s 1.874904821s 1.8856172s 1.88564483s 1.88771555s 1.892582071s 1.894097523s 1.8974075s 1.902910315s 1.907890773s 1.910129314s 1.917908472s 1.92083655s 1.932529288s 1.933415935s 1.940676455s 1.941762426s 1.943840413s 1.961109213s 1.966965688s 1.973344044s 2.005468422s 2.009474943s 2.022683571s 2.034242704s 2.060290693s 2.061625746s 2.08028286s 2.082241886s 2.106306408s 2.124802167s 2.127910839s 2.132511077s 2.142532644s 2.146887442s 2.149658979s 2.150765597s 2.152698121s 2.185433548s 2.189855506s 2.215513031s 2.220309086s 2.234761362s 2.246797925s 2.266744034s 2.268218537s 2.269894856s 2.270839388s 2.275847068s 2.276433563s 2.276689114s 2.290041485s 2.296170704s 2.317090993s 2.320460226s 2.328332022s 2.329125296s 2.346055897s 2.355266476s 2.370027319s 2.411373174s 2.416721632s 2.438448044s 2.469452761s 2.489162782s 2.54405688s 2.561093984s 2.598747544s 2.605168184s 2.617392644s 2.636622791s 2.693933538s] Feb 20 13:51:45.604: INFO: 50 %ile: 1.809529439s Feb 20 13:51:45.604: INFO: 90 %ile: 2.296170704s Feb 20 13:51:45.604: INFO: 99 %ile: 2.636622791s Feb 20 13:51:45.604: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:51:45.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5071" for this suite. 
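To read the block above: each "Created:" line is a new Service stamped out against the single svc-latency-rc backend, each "Got endpoints:" line marks when that Service's endpoints first listed the pod, and the bracketed duration is the gap between the two. With 200 sorted samples, the reported 50/90/99 %ile values sit near the 100th, 180th, and 198th entries of the Latencies list. Each generated Service amounts to something like the following sketch (the selector label is an assumption; it only has to match the RC's pod template labels):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &v1.Service{
		// GenerateName makes the server append a random suffix,
		// producing names like the latency-svc-k95gl seen above.
		ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"name": "svc-latency-rc"}, // illustrative label
			Ports: []v1.ServicePort{{
				Protocol:   v1.ProtocolTCP,
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))
}

The measured latency is thus the window between Service creation and the endpoints object listing the backend pod, dominated by the endpoints controller and watch propagation.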
Feb 20 13:52:21.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:52:21.763: INFO: namespace svc-latency-5071 deletion completed in 36.145381626s • [SLOW TEST:69.051 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:52:21.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-9136 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9136 to expose endpoints map[] Feb 20 13:52:21.924: INFO: Get endpoints failed (3.655682ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 20 13:52:22.938: INFO: successfully validated that service multi-endpoint-test in namespace services-9136 exposes endpoints map[] (1.017927705s elapsed) STEP: Creating pod pod1 in namespace services-9136 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9136 to expose endpoints map[pod1:[100]] Feb 20 13:52:27.132: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.176684433s elapsed, will retry) Feb 20 13:52:31.205: INFO: successfully validated that service multi-endpoint-test in namespace services-9136 exposes endpoints map[pod1:[100]] (8.248971512s elapsed) STEP: Creating pod pod2 in namespace services-9136 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9136 to expose endpoints map[pod1:[100] pod2:[101]] Feb 20 13:52:36.435: INFO: Unexpected endpoints: found map[7dc36a83-1c97-4b17-bd73-f35343cc6173:[100]], expected map[pod1:[100] pod2:[101]] (5.223924699s elapsed, will retry) Feb 20 13:52:38.533: INFO: successfully validated that service multi-endpoint-test in namespace services-9136 exposes endpoints map[pod1:[100] pod2:[101]] (7.321735567s elapsed) STEP: Deleting pod pod1 in namespace services-9136 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9136 to expose endpoints map[pod2:[101]] Feb 20 13:52:39.683: INFO: successfully validated that service multi-endpoint-test in namespace services-9136 exposes endpoints map[pod2:[101]] (1.143361206s elapsed) STEP: Deleting pod pod2 in namespace services-9136 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9136 to expose endpoints map[] Feb 20 13:52:41.955: INFO: successfully validated that service multi-endpoint-test in namespace services-9136 exposes endpoints map[] (2.226496069s elapsed) [AfterEach] [sig-network] 
Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:52:42.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9136" for this suite. Feb 20 13:52:48.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:52:48.632: INFO: namespace services-9136 deletion completed in 6.158494701s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:26.868 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:52:48.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 13:52:48.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4729' Feb 20 13:52:51.044: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 13:52:51.044: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 20 13:52:51.097: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 20 13:52:51.097: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 20 13:52:51.145: INFO: scanned /root for discovery docs: Feb 20 13:52:51.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4729' Feb 20 13:53:13.379: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 20 13:53:13.379: INFO: stdout: "Created e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1\nScaling up e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 20 13:53:13.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4729' Feb 20 13:53:13.526: INFO: stderr: "" Feb 20 13:53:13.526: INFO: stdout: "e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1-zwzwc " Feb 20 13:53:13.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1-zwzwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4729' Feb 20 13:53:13.672: INFO: stderr: "" Feb 20 13:53:13.672: INFO: stdout: "true" Feb 20 13:53:13.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1-zwzwc -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4729' Feb 20 13:53:13.790: INFO: stderr: "" Feb 20 13:53:13.790: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 20 13:53:13.790: INFO: e2e-test-nginx-rc-7e6877bca616808fa9b1e91aca20e8e1-zwzwc is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Feb 20 13:53:13.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4729' Feb 20 13:53:13.912: INFO: stderr: "" Feb 20 13:53:13.912: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:53:13.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4729" for this suite. Feb 20 13:53:35.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:53:36.072: INFO: namespace kubectl-4729 deletion completed in 22.149721816s • [SLOW TEST:47.440 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:53:36.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Feb 20 13:53:46.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-3f1d24dd-83c8-47f6-ad14-b16ad4acee4b -c busybox-main-container --namespace=emptydir-3032 -- cat /usr/share/volumeshare/shareddata.txt' Feb 20 13:53:46.958: INFO: stderr: "I0220 13:53:46.487634 1236 log.go:172] (0xc000966370) (0xc0003228c0) Create stream\nI0220 13:53:46.487872 1236 log.go:172] (0xc000966370) (0xc0003228c0) Stream added, broadcasting: 1\nI0220 13:53:46.498296 1236 log.go:172] (0xc000966370) Reply frame received for 1\nI0220 13:53:46.498371 1236 log.go:172] (0xc000966370) (0xc000804000) Create stream\nI0220 13:53:46.498391 1236 log.go:172] (0xc000966370) (0xc000804000) Stream added, broadcasting: 3\nI0220 13:53:46.503073 1236 log.go:172] (0xc000966370) Reply frame received for 3\nI0220 13:53:46.503388 1236 log.go:172] (0xc000966370) (0xc0008ac000) Create stream\nI0220 
13:53:46.503530 1236 log.go:172] (0xc000966370) (0xc0008ac000) Stream added, broadcasting: 5\nI0220 13:53:46.507269 1236 log.go:172] (0xc000966370) Reply frame received for 5\nI0220 13:53:46.762794 1236 log.go:172] (0xc000966370) Data frame received for 3\nI0220 13:53:46.762835 1236 log.go:172] (0xc000804000) (3) Data frame handling\nI0220 13:53:46.762851 1236 log.go:172] (0xc000804000) (3) Data frame sent\nI0220 13:53:46.949250 1236 log.go:172] (0xc000966370) (0xc000804000) Stream removed, broadcasting: 3\nI0220 13:53:46.949432 1236 log.go:172] (0xc000966370) Data frame received for 1\nI0220 13:53:46.949439 1236 log.go:172] (0xc0003228c0) (1) Data frame handling\nI0220 13:53:46.949456 1236 log.go:172] (0xc0003228c0) (1) Data frame sent\nI0220 13:53:46.949494 1236 log.go:172] (0xc000966370) (0xc0003228c0) Stream removed, broadcasting: 1\nI0220 13:53:46.949639 1236 log.go:172] (0xc000966370) (0xc0008ac000) Stream removed, broadcasting: 5\nI0220 13:53:46.949805 1236 log.go:172] (0xc000966370) Go away received\nI0220 13:53:46.950325 1236 log.go:172] (0xc000966370) (0xc0003228c0) Stream removed, broadcasting: 1\nI0220 13:53:46.950353 1236 log.go:172] (0xc000966370) (0xc000804000) Stream removed, broadcasting: 3\nI0220 13:53:46.950357 1236 log.go:172] (0xc000966370) (0xc0008ac000) Stream removed, broadcasting: 5\n" Feb 20 13:53:46.958: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:53:46.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3032" for this suite. Feb 20 13:53:52.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:53:53.102: INFO: namespace emptydir-3032 deletion completed in 6.134207615s • [SLOW TEST:17.029 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:53:53.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:53:53.202: INFO: Creating ReplicaSet my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45 Feb 20 13:53:53.214: INFO: Pod name my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45: Found 0 pods out of 1 Feb 20 13:53:58.243: INFO: Pod name my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45: Found 1 pods out of 1 Feb 20 13:53:58.243: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45" is running Feb 20 13:54:04.257: INFO: Pod 
"my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45-gbwvs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 13:53:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 13:53:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 13:53:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 13:53:53 +0000 UTC Reason: Message:}]) Feb 20 13:54:04.258: INFO: Trying to dial the pod Feb 20 13:54:09.302: INFO: Controller my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45: Got expected result from replica 1 [my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45-gbwvs]: "my-hostname-basic-a44a19d0-49d9-4b54-8474-f8c945a71f45-gbwvs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 13:54:09.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6624" for this suite. Feb 20 13:54:15.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:54:15.588: INFO: namespace replicaset-6624 deletion completed in 6.274897163s • [SLOW TEST:22.485 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:54:15.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9234 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9234 STEP: Creating statefulset with conflicting port in namespace statefulset-9234 STEP: Waiting until pod test-pod will start running in namespace statefulset-9234 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least 
once in namespace statefulset-9234 Feb 20 13:54:25.942: INFO: Observed stateful pod in namespace: statefulset-9234, name: ss-0, uid: 6862dd53-511e-46e0-ad1b-42741b8392fb, status phase: Pending. Waiting for statefulset controller to delete. Feb 20 13:59:25.942: INFO: Pod ss-0 expected to be re-created at least once [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 20 13:59:25.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-9234' Feb 20 13:59:26.103: INFO: stderr: "" Feb 20 13:59:26.103: INFO: stdout: "Name: ss-0\nNamespace: statefulset-9234\nPriority: 0\nNode: iruya-node/\nLabels: baz=blah\n controller-revision-hash=ss-6f98bdb9c4\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-q67vw (ro)\nVolumes:\n default-token-q67vw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-q67vw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m8s kubelet, iruya-node Predicate PodFitsHostPorts failed\n" Feb 20 13:59:26.103: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-9234 Priority: 0 Node: iruya-node/ Labels: baz=blah controller-revision-hash=ss-6f98bdb9c4 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-q67vw (ro) Volumes: default-token-q67vw: Type: Secret (a volume populated by a Secret) SecretName: default-token-q67vw Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m8s kubelet, iruya-node Predicate PodFitsHostPorts failed Feb 20 13:59:26.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-9234 --tail=100' Feb 20 13:59:26.257: INFO: rc: 1 Feb 20 13:59:26.257: INFO: Last 100 log lines of ss-0: Feb 20 13:59:26.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-9234' Feb 20 13:59:26.355: INFO: stderr: "" Feb 20 13:59:26.356: INFO: stdout: "Name: test-pod\nNamespace: statefulset-9234\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Thu, 20 Feb 2020 13:54:16 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nContainers:\n nginx:\n Container ID: docker://92710c84871b0547ca30dc3932bcdd36bdf35800a76870c185228382036083f1\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Thu, 20 Feb 2020 13:54:24 +0000\n Ready: True\n Restart Count: 
0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-q67vw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-q67vw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-q67vw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m5s kubelet, iruya-node Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m3s kubelet, iruya-node Created container nginx\n Normal Started 5m2s kubelet, iruya-node Started container nginx\n" Feb 20 13:59:26.356: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-9234 Priority: 0 Node: iruya-node/10.96.3.65 Start Time: Thu, 20 Feb 2020 13:54:16 +0000 Labels: Annotations: Status: Running IP: 10.44.0.1 Containers: nginx: Container ID: docker://92710c84871b0547ca30dc3932bcdd36bdf35800a76870c185228382036083f1 Image: docker.io/library/nginx:1.14-alpine Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Thu, 20 Feb 2020 13:54:24 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-q67vw (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-q67vw: Type: Secret (a volume populated by a Secret) SecretName: default-token-q67vw Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m5s kubelet, iruya-node Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m3s kubelet, iruya-node Created container nginx Normal Started 5m2s kubelet, iruya-node Started container nginx Feb 20 13:59:26.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-9234 --tail=100' Feb 20 13:59:26.452: INFO: stderr: "" Feb 20 13:59:26.452: INFO: stdout: "" Feb 20 13:59:26.452: INFO: Last 100 log lines of test-pod: Feb 20 13:59:26.452: INFO: Deleting all statefulset in ns statefulset-9234 Feb 20 13:59:26.457: INFO: Scaling statefulset ss to 0 Feb 20 13:59:36.503: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 13:59:36.508: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "statefulset-9234". STEP: Found 10 events. 
Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:15 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:17 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-9234/ss is recreating failed Pod ss-0 Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:17 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:17 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:18 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again. Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:18 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:18 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:21 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:23 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx Feb 20 13:59:36.544: INFO: At 2020-02-20 13:54:24 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx Feb 20 13:59:36.550: INFO: POD NODE PHASE GRACE CONDITIONS Feb 20 13:59:36.550: INFO: test-pod iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:54:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:54:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:54:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:54:16 +0000 UTC }] Feb 20 13:59:36.550: INFO: Feb 20 13:59:36.598: INFO: Logging node info for node iruya-node Feb 20 13:59:36.601: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:25081235,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 
3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-20 13:58:54 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-20 13:58:54 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-20 13:58:54 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-20 13:58:54 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[aquasec/kube-bench@sha256:33d50ec2fdc6644ffa70b088af1a9932f16d6bb9391a9f73045c8c6b4f73f4e4 aquasec/kube-bench:latest] 21536876} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} 
{[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 20 13:59:36.602: INFO: Logging kubelet events for node iruya-node Feb 20 13:59:36.607: INFO: Logging pods the kubelet thinks is on node iruya-node Feb 20 13:59:36.628: 
INFO: kube-bench-j7kcs started at 2020-02-11 06:42:30 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.628: INFO: Container kube-bench ready: false, restart count 0 Feb 20 13:59:36.628: INFO: test-pod started at 2020-02-20 13:54:16 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.628: INFO: Container nginx ready: true, restart count 0 Feb 20 13:59:36.628: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.628: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 13:59:36.628: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded) Feb 20 13:59:36.628: INFO: Container weave ready: true, restart count 0 Feb 20 13:59:36.628: INFO: Container weave-npc ready: true, restart count 0 W0220 13:59:36.633024 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 13:59:36.689: INFO: Latency metrics for node iruya-node Feb 20 13:59:36.689: INFO: Logging node info for node iruya-server-sfge57q7djm7 Feb 20 13:59:36.694: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:25081255,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-20 13:59:10 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-20 13:59:10 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-20 13:59:10 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-20 13:59:10 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 20 13:59:36.695: INFO: Logging kubelet events for node iruya-server-sfge57q7djm7 Feb 20 13:59:36.701: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 Feb 20 13:59:36.717: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 13:59:36.717: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container kube-controller-manager ready: true, restart count 23 Feb 20 13:59:36.717: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container kube-apiserver ready: true, restart count 0 Feb 20 13:59:36.717: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container coredns ready: true, restart count 0 Feb 20 13:59:36.717: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container kube-scheduler ready: true, restart count 15 Feb 20 13:59:36.717: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded) Feb 20 13:59:36.717: INFO: Container weave ready: true, restart count 0 Feb 20 13:59:36.717: INFO: Container weave-npc ready: true, restart count 0 Feb 20 13:59:36.717: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container coredns ready: true, restart count 0 Feb 20 13:59:36.717: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded) Feb 20 13:59:36.717: INFO: Container etcd ready: true, restart count 0 W0220 13:59:36.730612 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 13:59:36.789: INFO: Latency metrics for node iruya-server-sfge57q7djm7 Feb 20 13:59:36.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9234" for this suite. 
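[Note] The PodFitsHostPorts warnings above are the conflict this test sets up deliberately: test-pod already holds host port 21017 on iruya-node, so ss-0 cannot be scheduled there. The FailedCreate event suggests the controller's re-create attempt never landed, and after five minutes the test gives up (the Failure verdict appears just below). Host-port claims that explain such rejections can be listed by hand with a jsonpath query along these lines (a sketch; the output formatting is illustrative):

kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}'
# two pods reporting the same hostPort on the same node explain a
# PodFitsHostPorts rejection; an empty second column means no host ports
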
Feb 20 13:59:58.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 13:59:59.290: INFO: namespace statefulset-9234 deletion completed in 22.493472742s • Failure [343.702 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 13:59:25.942: Pod ss-0 expected to be re-created at least once /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 13:59:59.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-cbbd3fb3-1df2-4209-b50b-0fafb342357c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-cbbd3fb3-1df2-4209-b50b-0fafb342357c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:01:23.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4729" for this suite. 
Feb 20 14:01:45.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:01:45.446: INFO: namespace projected-4729 deletion completed in 22.100981289s • [SLOW TEST:106.155 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:01:45.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:01:45.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2475" for this suite. 
Feb 20 14:01:51.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:01:51.853: INFO: namespace kubelet-test-2475 deletion completed in 6.210401569s • [SLOW TEST:6.406 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:01:51.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-50de902b-9ccb-4a14-a7e7-b1016b0ae502 STEP: Creating a pod to test consume secrets Feb 20 14:01:52.065: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6" in namespace "projected-7296" to be "success or failure" Feb 20 14:01:52.163: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 98.375692ms Feb 20 14:01:54.171: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106642374s Feb 20 14:01:56.180: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115321036s Feb 20 14:01:58.213: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148838531s Feb 20 14:02:00.222: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15702995s Feb 20 14:02:02.227: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.162317532s STEP: Saw pod success Feb 20 14:02:02.227: INFO: Pod "pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6" satisfied condition "success or failure" Feb 20 14:02:02.231: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6 container secret-volume-test: STEP: delete the pod Feb 20 14:02:02.297: INFO: Waiting for pod pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6 to disappear Feb 20 14:02:02.408: INFO: Pod pod-projected-secrets-fd910bc7-f7e9-47c0-9f68-5f844512c7e6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:02:02.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7296" for this suite. Feb 20 14:02:08.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:02:08.914: INFO: namespace projected-7296 deletion completed in 6.498152199s • [SLOW TEST:17.061 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:02:08.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-p7m8 STEP: Creating a pod to test atomic-volume-subpath Feb 20 14:02:09.032: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p7m8" in namespace "subpath-2564" to be "success or failure" Feb 20 14:02:09.085: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 53.304333ms Feb 20 14:02:11.114: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082232841s Feb 20 14:02:13.120: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088157395s Feb 20 14:02:15.130: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098271478s Feb 20 14:02:17.138: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 8.105412785s Feb 20 14:02:19.145: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 10.113328962s Feb 20 14:02:21.187: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.15445382s Feb 20 14:02:23.195: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 14.163035963s Feb 20 14:02:25.205: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 16.172825666s Feb 20 14:02:27.218: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 18.186140185s Feb 20 14:02:29.229: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 20.196393881s Feb 20 14:02:31.237: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 22.204744398s Feb 20 14:02:33.247: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 24.214609836s Feb 20 14:02:35.257: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 26.224941049s Feb 20 14:02:37.266: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Running", Reason="", readiness=true. Elapsed: 28.233822508s Feb 20 14:02:39.277: INFO: Pod "pod-subpath-test-configmap-p7m8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.245181967s STEP: Saw pod success Feb 20 14:02:39.277: INFO: Pod "pod-subpath-test-configmap-p7m8" satisfied condition "success or failure" Feb 20 14:02:39.281: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-p7m8 container test-container-subpath-configmap-p7m8: STEP: delete the pod Feb 20 14:02:39.360: INFO: Waiting for pod pod-subpath-test-configmap-p7m8 to disappear Feb 20 14:02:39.395: INFO: Pod pod-subpath-test-configmap-p7m8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-p7m8 Feb 20 14:02:39.395: INFO: Deleting pod "pod-subpath-test-configmap-p7m8" in namespace "subpath-2564" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:02:39.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2564" for this suite. 
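[Note] The interesting part of this test is the mount target: a subPath mount of a single configMap key placed over a file that already exists in the container filesystem, which the atomic-writer logic must handle. A hand-rolled equivalent (names illustrative; /etc/hosts is used only because it exists in every container):

kubectl create configmap subpath-demo --from-literal=mykey='hello from the configmap'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["cat", "/etc/hosts"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/hosts       # an existing file, replaced by the mount
      subPath: mykey
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF
kubectl logs subpath-demo-pod     # prints the configmap value, not the original file
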
Feb 20 14:02:45.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:02:45.568: INFO: namespace subpath-2564 deletion completed in 6.16218239s • [SLOW TEST:36.653 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:02:45.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 20 14:02:45.707: INFO: Waiting up to 5m0s for pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2" in namespace "downward-api-9137" to be "success or failure" Feb 20 14:02:45.742: INFO: Pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.123225ms Feb 20 14:02:47.753: INFO: Pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04598939s Feb 20 14:02:49.757: INFO: Pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049851251s Feb 20 14:02:51.771: INFO: Pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064290739s Feb 20 14:02:53.787: INFO: Pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080635005s STEP: Saw pod success Feb 20 14:02:53.788: INFO: Pod "downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2" satisfied condition "success or failure" Feb 20 14:02:53.794: INFO: Trying to get logs from node iruya-node pod downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2 container dapi-container: STEP: delete the pod Feb 20 14:02:54.016: INFO: Waiting for pod downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2 to disappear Feb 20 14:02:54.029: INFO: Pod downward-api-b9c862dd-1238-4908-99e6-bccdd356ecc2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:02:54.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9137" for this suite. 
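[Note] The env var under test comes from the downward API's fieldRef to status.hostIP, which is where the value checked inside the container originates (10.96.3.65 for iruya-node in this run). A minimal pod spec exercising it (name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostip-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the IP of the node the pod landed on
EOF
kubectl logs hostip-env-demo
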
Feb 20 14:03:00.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:03:00.151: INFO: namespace downward-api-9137 deletion completed in 6.115210804s • [SLOW TEST:14.583 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:03:00.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-0e249c79-1850-4449-b6bc-f040b385d8c1 STEP: Creating a pod to test consume secrets Feb 20 14:03:00.279: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6" in namespace "projected-3165" to be "success or failure" Feb 20 14:03:00.297: INFO: Pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.963943ms Feb 20 14:03:02.305: INFO: Pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025977405s Feb 20 14:03:04.314: INFO: Pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035295905s Feb 20 14:03:06.323: INFO: Pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044142584s Feb 20 14:03:08.334: INFO: Pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055476468s STEP: Saw pod success Feb 20 14:03:08.335: INFO: Pod "pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6" satisfied condition "success or failure" Feb 20 14:03:08.339: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6 container projected-secret-volume-test: STEP: delete the pod Feb 20 14:03:08.420: INFO: Waiting for pod pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6 to disappear Feb 20 14:03:08.422: INFO: Pod pod-projected-secrets-8773dd46-7954-4d49-863e-b4ccf5c0e8b6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:03:08.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3165" for this suite. 
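[Note] "Mappings and Item Mode" means the secret key is renamed on disk via items[].path and given an explicit per-file mode, instead of taking the key name and defaultMode. A sketch under illustrative names:

kubectl create secret generic map-demo --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /projected/creds && cat /projected/creds/user"]
    volumeMounts:
    - name: sec
      mountPath: /projected
  volumes:
  - name: sec
    projected:
      sources:
      - secret:
          name: map-demo
          items:
          - key: username
            path: creds/user       # mapping: key "username" -> file creds/user
            mode: 0400             # per-item file mode
EOF
kubectl logs secret-map-demo       # ls shows -r-------- on the mapped file
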
Feb 20 14:03:14.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:03:14.607: INFO: namespace projected-3165 deletion completed in 6.180345088s • [SLOW TEST:14.456 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:03:14.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 20 14:03:22.847: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:03:22.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5712" for this suite. 
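[Note] The assertion "Expected: &{DONE} to match ..." checks the terminated container's status message, which the kubelet reads from the file named by terminationMessagePath; here the path is non-default and the container runs as a non-root UID. A sketch modeled on that setup (names and UID illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: term-msg-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "printf DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
EOF
# once terminated, the message surfaces in the container status:
kubectl get pod term-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
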
Feb 20 14:03:28.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:03:29.072: INFO: namespace container-runtime-5712 deletion completed in 6.162605428s • [SLOW TEST:14.464 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:03:29.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 14:03:29.168: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:03:37.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2651" for this suite. 
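[Note] Unlike the kubectl exec calls elsewhere in this log (which speak SPDY), this test drives the same exec subresource over a WebSocket. The endpoint can be reached by hand through kubectl proxy with any WebSocket client that speaks the channel subprotocol; wscat is used here purely as an illustration:

kubectl proxy --port=8001 &
# repeated "command" params build the argv; stdout/stderr select the streams
wscat --subprotocol base64.channel.k8s.io \
  --connect 'ws://127.0.0.1:8001/api/v1/namespaces/default/pods/mypod/exec?command=cat&command=/etc/resolv.conf&stdout=1&stderr=1'
# frames arrive prefixed with a channel digit (1=stdout, 2=stderr), payload base64-encoded
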
Feb 20 14:04:19.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:04:19.838: INFO: namespace pods-2651 deletion completed in 42.205534633s • [SLOW TEST:50.766 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:04:19.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-d3f897ed-56b8-43e4-863b-961128afc33f STEP: Creating secret with name s-test-opt-upd-38a831ba-4f51-42e4-bcaf-915cdea71c2f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d3f897ed-56b8-43e4-863b-961128afc33f STEP: Updating secret s-test-opt-upd-38a831ba-4f51-42e4-bcaf-915cdea71c2f STEP: Creating secret with name s-test-opt-create-d0063336-86b7-4ad1-8e57-402a9c5206b5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:04:36.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6837" for this suite. 
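The projected-secret spec above relies on optional sources: a projected volume may reference a secret that does not exist yet, or that is deleted while the pod runs, without failing the pod, and the kubelet re-syncs the volume contents as secrets come and go. A sketch of such a volume (the secret name prefixes mirror the log; the surrounding pod is assumed):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						// Deleting this secret later must not break the volume.
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del-example"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						// Created only after the pod is running; should appear in the volume.
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create-example"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}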
Feb 20 14:05:00.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:05:00.515: INFO: namespace projected-6837 deletion completed in 24.153130467s • [SLOW TEST:40.677 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:05:00.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Feb 20 14:05:08.674: INFO: Pod pod-hostip-e341820b-13ac-4f8c-8460-cdab345579c4 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:05:08.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-968" for this suite. 
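The host-IP check above is a single field read: once the pod is scheduled and the kubelet reports status, Status.HostIP carries the node's address (10.96.3.65 for iruya-node here). A client-go sketch of the same read (pod name is illustrative; recent client-go takes a context argument, while the v1.15-vintage client used in this run omitted it):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	pod, err := cs.CoreV1().Pods("pods-968").Get(context.TODO(), "pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Empty until the scheduler has placed the pod and the kubelet reports status.
	fmt.Println("hostIP:", pod.Status.HostIP)
}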
Feb 20 14:05:28.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:05:28.816: INFO: namespace pods-968 deletion completed in 20.135128925s • [SLOW TEST:28.300 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:05:28.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a71717db-f5ad-40a7-88cf-b292508f1463 STEP: Creating a pod to test consume configMaps Feb 20 14:05:28.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8" in namespace "configmap-2599" to be "success or failure" Feb 20 14:05:28.910: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.426909ms Feb 20 14:05:31.033: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137499717s Feb 20 14:05:33.048: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153217648s Feb 20 14:05:35.059: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163764579s Feb 20 14:05:37.069: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174025803s Feb 20 14:05:39.078: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.183217605s STEP: Saw pod success Feb 20 14:05:39.078: INFO: Pod "pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8" satisfied condition "success or failure" Feb 20 14:05:39.082: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8 container configmap-volume-test: STEP: delete the pod Feb 20 14:05:39.169: INFO: Waiting for pod pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8 to disappear Feb 20 14:05:39.176: INFO: Pod pod-configmaps-d1964715-2e07-4881-8717-585d8af41dd8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:05:39.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2599" for this suite. 
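defaultMode sets the file permissions the kubelet applies to every key projected from the ConfigMap; the test then reads the mounted file back and verifies both content and mode. A sketch of the volume and mount (the ConfigMap name and mode 0400 are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner; applied to each projected file
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-example"},
				DefaultMode:          &mode,
			},
		},
	}
	mount := corev1.VolumeMount{Name: vol.Name, MountPath: "/etc/configmap-volume"}
	fmt.Println(vol.Name, mount.MountPath, fmt.Sprintf("%#o", mode))
}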
Feb 20 14:05:45.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:05:45.333: INFO: namespace configmap-2599 deletion completed in 6.151830048s • [SLOW TEST:16.517 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:05:45.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 14:05:45.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8319' Feb 20 14:05:48.075: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 14:05:48.075: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 20 14:05:48.086: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-k8sdm] Feb 20 14:05:48.086: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-k8sdm" in namespace "kubectl-8319" to be "running and ready" Feb 20 14:05:48.090: INFO: Pod "e2e-test-nginx-rc-k8sdm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355165ms Feb 20 14:05:50.096: INFO: Pod "e2e-test-nginx-rc-k8sdm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009868169s Feb 20 14:05:52.103: INFO: Pod "e2e-test-nginx-rc-k8sdm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017188782s Feb 20 14:05:54.117: INFO: Pod "e2e-test-nginx-rc-k8sdm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030726835s Feb 20 14:05:56.128: INFO: Pod "e2e-test-nginx-rc-k8sdm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041739056s Feb 20 14:05:58.138: INFO: Pod "e2e-test-nginx-rc-k8sdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.052012755s Feb 20 14:05:58.138: INFO: Pod "e2e-test-nginx-rc-k8sdm" satisfied condition "running and ready" Feb 20 14:05:58.138: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-k8sdm] Feb 20 14:05:58.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8319' Feb 20 14:05:58.340: INFO: stderr: "" Feb 20 14:05:58.340: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Feb 20 14:05:58.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8319' Feb 20 14:05:58.457: INFO: stderr: "" Feb 20 14:05:58.457: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:05:58.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8319" for this suite. Feb 20 14:06:20.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:06:20.592: INFO: namespace kubectl-8319 deletion completed in 22.12603001s • [SLOW TEST:35.259 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:06:20.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 14:06:20.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1174' Feb 20 14:06:20.900: INFO: stderr: "" Feb 20 14:06:20.900: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Feb 20 14:06:20.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1174' Feb 20 14:06:26.117: INFO: stderr: "" Feb 20 14:06:26.118: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:06:26.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1174" for this suite. Feb 20 14:06:32.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:06:32.254: INFO: namespace kubectl-1174 deletion completed in 6.127588442s • [SLOW TEST:11.661 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:06:32.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 20 14:06:32.387: INFO: Waiting up to 5m0s for pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98" in namespace "emptydir-3740" to be "success or failure" Feb 20 14:06:32.405: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98": Phase="Pending", Reason="", readiness=false. Elapsed: 17.889545ms Feb 20 14:06:34.478: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090365909s Feb 20 14:06:36.522: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13505093s Feb 20 14:06:38.561: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173763835s Feb 20 14:06:40.575: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187992495s Feb 20 14:06:42.585: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.197554375s STEP: Saw pod success Feb 20 14:06:42.585: INFO: Pod "pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98" satisfied condition "success or failure" Feb 20 14:06:42.588: INFO: Trying to get logs from node iruya-node pod pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98 container test-container: STEP: delete the pod Feb 20 14:06:42.662: INFO: Waiting for pod pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98 to disappear Feb 20 14:06:42.666: INFO: Pod pod-6b737a01-7735-4e2a-a1c7-609a49ec7e98 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:06:42.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3740" for this suite. Feb 20 14:06:48.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:06:48.787: INFO: namespace emptydir-3740 deletion completed in 6.116822843s • [SLOW TEST:16.532 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:06:48.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-6571 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6571 to expose endpoints map[] Feb 20 14:06:49.019: INFO: Get endpoints failed (35.415544ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 20 14:06:50.032: INFO: successfully validated that service endpoint-test2 in namespace services-6571 exposes endpoints map[] (1.048067166s elapsed) STEP: Creating pod pod1 in namespace services-6571 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6571 to expose endpoints map[pod1:[80]] Feb 20 14:06:54.168: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.116083388s elapsed, will retry) Feb 20 14:06:58.406: INFO: successfully validated that service endpoint-test2 in namespace services-6571 exposes endpoints map[pod1:[80]] (8.354691436s elapsed) STEP: Creating pod pod2 in namespace services-6571 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6571 to expose endpoints map[pod1:[80] pod2:[80]] Feb 20 14:07:03.196: INFO: Unexpected endpoints: found map[1be02882-85f4-418c-b00a-4981e02a98f8:[80]], expected map[pod1:[80] pod2:[80]] (4.777773368s elapsed, will retry) Feb 20 14:07:06.346: INFO: successfully validated that service endpoint-test2 in namespace services-6571 
exposes endpoints map[pod1:[80] pod2:[80]] (7.927844015s elapsed) STEP: Deleting pod pod1 in namespace services-6571 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6571 to expose endpoints map[pod2:[80]] Feb 20 14:07:07.399: INFO: successfully validated that service endpoint-test2 in namespace services-6571 exposes endpoints map[pod2:[80]] (1.041464629s elapsed) STEP: Deleting pod pod2 in namespace services-6571 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6571 to expose endpoints map[] Feb 20 14:07:08.546: INFO: successfully validated that service endpoint-test2 in namespace services-6571 exposes endpoints map[] (1.135422736s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:07:09.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6571" for this suite. Feb 20 14:07:31.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:07:32.076: INFO: namespace services-6571 deletion completed in 22.237918705s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:43.289 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:07:32.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 14:07:32.174: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 20 14:07:32.238: INFO: Number of nodes with available pods: 0 Feb 20 14:07:32.238: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 20 14:07:32.309: INFO: Number of nodes with available pods: 0 Feb 20 14:07:32.310: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:33.318: INFO: Number of nodes with available pods: 0 Feb 20 14:07:33.318: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:34.321: INFO: Number of nodes with available pods: 0 Feb 20 14:07:34.321: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:35.325: INFO: Number of nodes with available pods: 0 Feb 20 14:07:35.325: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:36.337: INFO: Number of nodes with available pods: 0 Feb 20 14:07:36.337: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:37.316: INFO: Number of nodes with available pods: 0 Feb 20 14:07:37.316: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:38.329: INFO: Number of nodes with available pods: 0 Feb 20 14:07:38.329: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:39.318: INFO: Number of nodes with available pods: 0 Feb 20 14:07:39.318: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:40.321: INFO: Number of nodes with available pods: 0 Feb 20 14:07:40.321: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:41.325: INFO: Number of nodes with available pods: 1 Feb 20 14:07:41.326: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 20 14:07:41.387: INFO: Number of nodes with available pods: 1 Feb 20 14:07:41.387: INFO: Number of running nodes: 0, number of available pods: 1 Feb 20 14:07:42.399: INFO: Number of nodes with available pods: 0 Feb 20 14:07:42.399: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 20 14:07:42.503: INFO: Number of nodes with available pods: 0 Feb 20 14:07:42.503: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:43.513: INFO: Number of nodes with available pods: 0 Feb 20 14:07:43.514: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:44.526: INFO: Number of nodes with available pods: 0 Feb 20 14:07:44.526: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:45.519: INFO: Number of nodes with available pods: 0 Feb 20 14:07:45.519: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:46.533: INFO: Number of nodes with available pods: 0 Feb 20 14:07:46.533: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:47.513: INFO: Number of nodes with available pods: 0 Feb 20 14:07:47.513: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:48.521: INFO: Number of nodes with available pods: 0 Feb 20 14:07:48.522: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:49.511: INFO: Number of nodes with available pods: 0 Feb 20 14:07:49.511: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:50.518: INFO: Number of nodes with available pods: 0 Feb 20 14:07:50.518: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:51.510: INFO: Number of nodes with available pods: 0 Feb 20 14:07:51.511: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:52.521: INFO: Number of nodes with available pods: 0 Feb 20 14:07:52.521: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:53.513: INFO: Number 
of nodes with available pods: 0 Feb 20 14:07:53.513: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:54.516: INFO: Number of nodes with available pods: 0 Feb 20 14:07:54.516: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:55.509: INFO: Number of nodes with available pods: 0 Feb 20 14:07:55.509: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:56.551: INFO: Number of nodes with available pods: 0 Feb 20 14:07:56.551: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:57.513: INFO: Number of nodes with available pods: 0 Feb 20 14:07:57.513: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:58.520: INFO: Number of nodes with available pods: 0 Feb 20 14:07:58.520: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:07:59.514: INFO: Number of nodes with available pods: 0 Feb 20 14:07:59.514: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:08:00.538: INFO: Number of nodes with available pods: 0 Feb 20 14:08:00.538: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:08:01.511: INFO: Number of nodes with available pods: 0 Feb 20 14:08:01.511: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:08:02.609: INFO: Number of nodes with available pods: 0 Feb 20 14:08:02.609: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:08:03.517: INFO: Number of nodes with available pods: 0 Feb 20 14:08:03.517: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:08:04.552: INFO: Number of nodes with available pods: 1 Feb 20 14:08:04.552: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5290, will wait for the garbage collector to delete the pods Feb 20 14:08:04.639: INFO: Deleting DaemonSet.extensions daemon-set took: 17.578497ms Feb 20 14:08:04.940: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.328185ms Feb 20 14:08:16.691: INFO: Number of nodes with available pods: 0 Feb 20 14:08:16.691: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 14:08:16.697: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5290/daemonsets","resourceVersion":"25082470"},"items":null} Feb 20 14:08:16.709: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5290/pods","resourceVersion":"25082471"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:08:16.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5290" for this suite. 
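The complex-daemon spec above drives scheduling purely through labels: the DaemonSet's pod template carries a nodeSelector, so daemon pods appear only on nodes labeled to match (blue, then green) and are unscheduled when a node is relabeled; midway it also switches the update strategy to RollingUpdate. A sketch of such a DaemonSet (label keys, values, and image are illustrative assumptions):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods land only on nodes labeled color=green; relabeling a
					// node to another color evicts the daemon pod from it.
					NodeSelector: map[string]string{"color": "green"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "busybox",
					}},
				},
			},
		},
	}
	fmt.Println(ds.Name)
}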
Feb 20 14:08:22.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:08:22.963: INFO: namespace daemonsets-5290 deletion completed in 6.193269038s • [SLOW TEST:50.886 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:08:22.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 14:08:23.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6651' Feb 20 14:08:23.448: INFO: stderr: "" Feb 20 14:08:23.448: INFO: stdout: "replicationcontroller/redis-master created\n" Feb 20 14:08:23.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6651' Feb 20 14:08:23.972: INFO: stderr: "" Feb 20 14:08:23.973: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Feb 20 14:08:24.980: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:24.980: INFO: Found 0 / 1 Feb 20 14:08:25.981: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:25.981: INFO: Found 0 / 1 Feb 20 14:08:26.990: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:26.990: INFO: Found 0 / 1 Feb 20 14:08:27.980: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:27.980: INFO: Found 0 / 1 Feb 20 14:08:28.980: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:28.980: INFO: Found 0 / 1 Feb 20 14:08:29.985: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:29.985: INFO: Found 0 / 1 Feb 20 14:08:30.980: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:30.980: INFO: Found 1 / 1 Feb 20 14:08:30.980: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 20 14:08:30.985: INFO: Selector matched 1 pods for map[app:redis] Feb 20 14:08:30.985: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 20 14:08:30.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4drmf --namespace=kubectl-6651' Feb 20 14:08:31.139: INFO: stderr: "" Feb 20 14:08:31.139: INFO: stdout: "Name: redis-master-4drmf\nNamespace: kubectl-6651\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Thu, 20 Feb 2020 14:08:23 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://dcd8892b92072e2f611d70bc39a4bf4224b0f294491bbb7d026951227e55a4b1\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 20 Feb 2020 14:08:30 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nkp5f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nkp5f:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nkp5f\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-6651/redis-master-4drmf to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Feb 20 14:08:31.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6651' Feb 20 14:08:31.296: INFO: stderr: "" Feb 20 14:08:31.296: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6651\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-4drmf\n" Feb 20 14:08:31.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6651' Feb 20 14:08:31.424: INFO: stderr: "" Feb 20 14:08:31.424: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6651\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.107.154.26\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Feb 20 14:08:31.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Feb 20 14:08:31.542: INFO: stderr: "" Feb 20 14:08:31.542: INFO: stdout: "Name: iruya-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: <none>\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 20 Feb 2020 14:07:57 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 20 Feb 2020 14:07:57 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 20 Feb 2020 14:07:57 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 20 Feb 2020 14:07:57 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 200d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 131d\n kubectl-6651 redis-master-4drmf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Feb 20 14:08:31.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6651' Feb 20 14:08:31.624: INFO: stderr: "" Feb 20 14:08:31.624: INFO: stdout: "Name: kubectl-6651\nLabels: e2e-framework=kubectl\n e2e-run=e3751b31-7a0a-4595-8952-e717bf6923db\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:08:31.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6651" for this suite. 
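The describe checks above are plain substring assertions on kubectl output. A stdlib sketch of the same pattern, under the assumption that kubectl is on PATH and the pod from the transcript still exists (the expected field names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config",
		"describe", "pod", "redis-master-4drmf",
		"--namespace", "kubectl-6651",
	).CombinedOutput()
	if err != nil {
		panic(err)
	}
	// The e2e test asserts that fields like these appear in the output.
	for _, want := range []string{"Name:", "Namespace:", "Node:", "Status:", "Controlled By:"} {
		fmt.Printf("%-15s found=%v\n", want, strings.Contains(string(out), want))
	}
}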
Feb 20 14:08:53.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:08:53.792: INFO: namespace kubectl-6651 deletion completed in 22.162824533s • [SLOW TEST:30.829 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:08:53.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Feb 20 14:08:53.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 20 14:08:54.101: INFO: stderr: "" Feb 20 14:08:54.101: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:08:54.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4236" for this suite. 
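The same api-versions check can be made programmatically: the discovery client lists the served API groups, and the legacy core group (empty name) advertises version v1. A client-go sketch:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if g.Name == "" && v.Version == "v1" { // the core/legacy group
				fmt.Println("v1 is served")
			}
		}
	}
}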
Feb 20 14:09:00.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:09:00.253: INFO: namespace kubectl-4236 deletion completed in 6.143034583s • [SLOW TEST:6.460 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:09:00.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 20 14:09:00.367: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:09:12.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7694" for this suite. 
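With restartPolicy Never, a failing init container is terminal: the kubelet never starts the app containers and the pod goes to Failed, which is what the roughly 12-second wait above observes. A sketch of such a pod (names, image, and commands are illustrative, not the suite's fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always fails, so init never completes
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"/bin/true"}, // never started
			}},
		},
	}
	// Expected end state: pod.Status.Phase == corev1.PodFailed,
	// with the app container left in Waiting.
	fmt.Println(pod.Name)
}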
Feb 20 14:09:19.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:09:19.222: INFO: namespace init-container-7694 deletion completed in 6.142285735s • [SLOW TEST:18.968 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:09:19.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a7f19614-c713-4dde-bd5f-55b58f41fd9e STEP: Creating a pod to test consume secrets Feb 20 14:09:19.362: INFO: Waiting up to 5m0s for pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d" in namespace "secrets-4619" to be "success or failure" Feb 20 14:09:19.389: INFO: Pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.44073ms Feb 20 14:09:21.396: INFO: Pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033322506s Feb 20 14:09:23.409: INFO: Pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04643915s Feb 20 14:09:25.416: INFO: Pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053823183s Feb 20 14:09:27.423: INFO: Pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061061589s STEP: Saw pod success Feb 20 14:09:27.424: INFO: Pod "pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d" satisfied condition "success or failure" Feb 20 14:09:27.427: INFO: Trying to get logs from node iruya-node pod pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d container secret-volume-test: STEP: delete the pod Feb 20 14:09:27.468: INFO: Waiting for pod pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d to disappear Feb 20 14:09:27.537: INFO: Pod pod-secrets-a1c3ac9e-70de-4bb4-9ce7-1677c6872d9d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:09:27.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4619" for this suite. 
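This secret test combines three knobs: defaultMode for the projected file permissions, plus pod-level runAsUser and fsGroup so a non-root process can still read the group-owned files. A sketch of the relevant spec fields (UID, GID, mode, and names are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid, gid := int64(1000), int64(1001)
	mode := int32(0440) // readable by owner and group only
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &uid, // the container process is non-root...
			FSGroup:   &gid, // ...but the secret files are group-owned by this GID
		},
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName:  "secret-test-example",
					DefaultMode: &mode,
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "secret-volume-test",
			Image:        "busybox",
			Command:      []string{"/bin/sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
			VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
		}},
	}
	fmt.Println(len(spec.Volumes))
}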
Feb 20 14:09:33.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:09:33.721: INFO: namespace secrets-4619 deletion completed in 6.177846165s • [SLOW TEST:14.499 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:09:33.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 20 14:09:42.468: INFO: Successfully updated pod "labelsupdatedce69908-bad4-4803-99ac-3585f51d3ab0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:09:46.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2675" for this suite. 
Feb 20 14:10:10.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:10:10.798: INFO: namespace projected-2675 deletion completed in 24.146648249s • [SLOW TEST:37.077 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:10:10.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 14:10:10.930: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e" in namespace "downward-api-3078" to be "success or failure" Feb 20 14:10:10.977: INFO: Pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.557117ms Feb 20 14:10:12.984: INFO: Pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054395683s Feb 20 14:10:14.996: INFO: Pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065770918s Feb 20 14:10:17.040: INFO: Pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109845421s Feb 20 14:10:19.068: INFO: Pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137740927s STEP: Saw pod success Feb 20 14:10:19.068: INFO: Pod "downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e" satisfied condition "success or failure" Feb 20 14:10:19.078: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e container client-container: STEP: delete the pod Feb 20 14:10:19.150: INFO: Waiting for pod downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e to disappear Feb 20 14:10:19.155: INFO: Pod downwardapi-volume-41bc9599-1efe-449b-a96e-36804b6d236e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:10:19.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3078" for this suite. 
Feb 20 14:10:25.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:10:25.297: INFO: namespace downward-api-3078 deletion completed in 6.137910968s • [SLOW TEST:14.498 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:10:25.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 14:10:25.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769" in namespace "projected-5905" to be "success or failure" Feb 20 14:10:25.400: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377936ms Feb 20 14:10:27.413: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019192071s Feb 20 14:10:29.426: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03240561s Feb 20 14:10:31.438: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043666819s Feb 20 14:10:33.444: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050401568s Feb 20 14:10:35.462: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067772798s STEP: Saw pod success Feb 20 14:10:35.462: INFO: Pod "downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769" satisfied condition "success or failure" Feb 20 14:10:35.467: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769 container client-container: STEP: delete the pod Feb 20 14:10:35.661: INFO: Waiting for pod downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769 to disappear Feb 20 14:10:35.698: INFO: Pod downwardapi-volume-19f9b607-bb01-4871-b0a3-322db18dc769 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:10:35.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5905" for this suite. 
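Here the downward API exposes the container's own resource limit as a file: a resourceFieldRef on limits.cpu is projected into the volume and the test reads it back. A sketch of that projection (volume name, path, and container name are illustrative; with the default divisor of 1, a limit such as 500m is rounded up and the file would contain "1"):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // must name a container in the same pod
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	// The container then reads the value from <mountPath>/cpu_limit.
	fmt.Println(vol.Name)
}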
Feb 20 14:10:41.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:10:41.915: INFO: namespace projected-5905 deletion completed in 6.211523607s • [SLOW TEST:16.618 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:10:41.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 20 14:10:42.063: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:10:59.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9428" for this suite. 
Feb 20 14:11:21.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:11:21.723: INFO: namespace init-container-9428 deletion completed in 22.189003373s • [SLOW TEST:39.807 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:11:21.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-e362fc7c-2f64-4b09-8aac-83e0f0c331ed STEP: Creating configMap with name cm-test-opt-upd-dcefcb48-443b-4122-8915-18f884af5d0d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e362fc7c-2f64-4b09-8aac-83e0f0c331ed STEP: Updating configmap cm-test-opt-upd-dcefcb48-443b-4122-8915-18f884af5d0d STEP: Creating configMap with name cm-test-opt-create-6d6d0a85-d459-4261-903e-12327952bd92 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:11:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8101" for this suite. 
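The optional: flag is what lets the pod above survive one of its ConfigMaps being deleted while still picking up updates to the other. A minimal sketch of such a projected volume, with hypothetical pod and ConfigMap names:

kubectl create configmap cm-opt --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo             # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-opt
          optional: true              # tolerate the ConfigMap being absent or deleted
EOF
# The kubelet resyncs projected volumes periodically, so edits show up after a short delay:
kubectl exec projected-cm-demo -- cat /etc/config/data-1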
Feb 20 14:12:02.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:12:02.440: INFO: namespace projected-8101 deletion completed in 24.128757042s • [SLOW TEST:40.716 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:12:02.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Feb 20 14:12:02.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8003' Feb 20 14:12:02.867: INFO: stderr: "" Feb 20 14:12:02.868: INFO: stdout: "pod/pause created\n" Feb 20 14:12:02.868: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 20 14:12:02.868: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8003" to be "running and ready" Feb 20 14:12:02.884: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.401093ms Feb 20 14:12:04.894: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026331684s Feb 20 14:12:06.901: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033190837s Feb 20 14:12:08.910: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04221054s Feb 20 14:12:10.923: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.055124893s Feb 20 14:12:10.923: INFO: Pod "pause" satisfied condition "running and ready" Feb 20 14:12:10.923: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Feb 20 14:12:10.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8003' Feb 20 14:12:11.034: INFO: stderr: "" Feb 20 14:12:11.034: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 20 14:12:11.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8003' Feb 20 14:12:11.120: INFO: stderr: "" Feb 20 14:12:11.120: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 20 14:12:11.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8003' Feb 20 14:12:11.226: INFO: stderr: "" Feb 20 14:12:11.226: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 20 14:12:11.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8003' Feb 20 14:12:11.314: INFO: stderr: "" Feb 20 14:12:11.314: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Feb 20 14:12:11.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8003' Feb 20 14:12:11.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 14:12:11.408: INFO: stdout: "pod \"pause\" force deleted\n" Feb 20 14:12:11.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8003' Feb 20 14:12:11.494: INFO: stderr: "No resources found.\n" Feb 20 14:12:11.494: INFO: stdout: "" Feb 20 14:12:11.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8003 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 14:12:11.560: INFO: stderr: "" Feb 20 14:12:11.560: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:12:11.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8003" for this suite. 
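The label round trip above reduces to three kubectl invocations, shown here exactly as the test ran them (kubeconfig flag omitted):

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-8003
kubectl get pod pause -L testing-label --namespace=kubectl-8003    # -L adds a TESTING-LABEL column
kubectl label pods pause testing-label- --namespace=kubectl-8003   # a trailing '-' removes the label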
Feb 20 14:12:17.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:12:17.727: INFO: namespace kubectl-8003 deletion completed in 6.162875313s • [SLOW TEST:15.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:12:17.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:12:48.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2083" for this suite. Feb 20 14:12:54.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:12:54.288: INFO: namespace namespaces-2083 deletion completed in 6.149268647s STEP: Destroying namespace "nsdeletetest-4241" for this suite. Feb 20 14:12:54.290: INFO: Namespace nsdeletetest-4241 was already deleted STEP: Destroying namespace "nsdeletetest-6851" for this suite. 
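The same cascade can be checked by hand: deleting a namespace deletes every pod in it, and a recreated namespace of the same name starts empty. A sketch with illustrative names:

kubectl create namespace nsdeletetest-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdeletetest-demo -- sleep 3600
kubectl wait --for=condition=Ready pod/sleeper -n nsdeletetest-demo
kubectl delete namespace nsdeletetest-demo    # waits for termination by default
kubectl create namespace nsdeletetest-demo
kubectl get pods -n nsdeletetest-demo         # comes back empty: the pod went with the old namespace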
Feb 20 14:13:00.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:13:00.432: INFO: namespace nsdeletetest-6851 deletion completed in 6.142695738s • [SLOW TEST:42.705 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:13:00.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 14:13:00.508: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 20 14:13:00.586: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 20 14:13:05.595: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 14:13:09.612: INFO: Creating deployment "test-rolling-update-deployment" Feb 20 14:13:09.621: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 20 14:13:09.651: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 20 14:13:11.666: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 20 14:13:11.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 14:13:13.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 14:13:15.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717804789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 14:13:17.687: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 20 14:13:17.707: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7015,SelfLink:/apis/apps/v1/namespaces/deployment-7015/deployments/test-rolling-update-deployment,UID:ce6f0f20-4962-478c-b4f0-a82199fe1497,ResourceVersion:25083259,Generation:1,CreationTimestamp:2020-02-20 14:13:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-20 14:13:09 +0000 UTC 2020-02-20 14:13:09 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-20 14:13:16 +0000 UTC 2020-02-20 14:13:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 20 14:13:17.712: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7015,SelfLink:/apis/apps/v1/namespaces/deployment-7015/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:ca8b3e48-6841-4d55-a087-6b2ed8374674,ResourceVersion:25083246,Generation:1,CreationTimestamp:2020-02-20 14:13:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ce6f0f20-4962-478c-b4f0-a82199fe1497 0xc002511c97 0xc002511c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 20 14:13:17.712: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 20 14:13:17.712: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7015,SelfLink:/apis/apps/v1/namespaces/deployment-7015/replicasets/test-rolling-update-controller,UID:bddcb902-53ac-4a99-93a3-714698dbfb55,ResourceVersion:25083257,Generation:2,CreationTimestamp:2020-02-20 14:13:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ce6f0f20-4962-478c-b4f0-a82199fe1497 0xc002511baf 0xc002511bc0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 
20 14:13:17.718: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-hdnhm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-hdnhm,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7015,SelfLink:/api/v1/namespaces/deployment-7015/pods/test-rolling-update-deployment-79f6b9d75c-hdnhm,UID:29065782-a17d-4d90-a143-0c08e4e74148,ResourceVersion:25083245,Generation:0,CreationTimestamp:2020-02-20 14:13:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c ca8b3e48-6841-4d55-a087-6b2ed8374674 0xc001b70017 0xc001b70018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kglkc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kglkc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-kglkc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b70090} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b700b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 14:13:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 14:13:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 14:13:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 14:13:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-20 14:13:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-20 14:13:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e48a512e6f98b10d433f14f445c11ddfc1be496dc4d151a4d28d4ffcb580a789}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:13:17.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
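The adoption mechanics in this test are visible from the replica sets themselves: the Deployment's selector matches the pre-existing "test-rolling-update-controller" replica set, which is adopted and scaled to 0 while the new template's replica set takes over. Using the objects from this run, one would expect roughly:

kubectl rollout status deployment/test-rolling-update-deployment -n deployment-7015
kubectl get rs -n deployment-7015
# NAME                                         DESIRED   CURRENT   READY   (illustrative output)
# test-rolling-update-controller               0         0         0
# test-rolling-update-deployment-79f6b9d75c    1         1         1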
STEP: Destroying namespace "deployment-7015" for this suite. Feb 20 14:13:23.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:13:23.934: INFO: namespace deployment-7015 deletion completed in 6.206320919s • [SLOW TEST:23.501 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:13:23.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3521.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3521.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3521.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3521.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3521.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.169.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.169.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.169.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.169.199_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3521.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3521.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3521.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3521.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3521.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3521.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.169.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.169.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.169.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.169.199_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 14:13:36.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.296: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.331: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.341: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.347: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.355: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.358: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.362: INFO: Unable to read 10.106.169.199_udp@PTR from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.376: INFO: Unable to read 10.106.169.199_tcp@PTR from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.385: INFO: Unable to read jessie_udp@dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.391: INFO: Unable to read jessie_tcp@dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.397: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the 
server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.403: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.407: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.411: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-3521.svc.cluster.local from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.416: INFO: Unable to read jessie_udp@PodARecord from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.419: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.422: INFO: Unable to read 10.106.169.199_udp@PTR from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.424: INFO: Unable to read 10.106.169.199_tcp@PTR from pod dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9: the server could not find the requested resource (get pods dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9) Feb 20 14:13:36.425: INFO: Lookups using dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9 failed for: [wheezy_udp@dns-test-service.dns-3521.svc.cluster.local wheezy_tcp@dns-test-service.dns-3521.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-3521.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-3521.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.169.199_udp@PTR 10.106.169.199_tcp@PTR jessie_udp@dns-test-service.dns-3521.svc.cluster.local jessie_tcp@dns-test-service.dns-3521.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3521.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-3521.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-3521.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.169.199_udp@PTR 10.106.169.199_tcp@PTR] Feb 20 14:13:41.537: INFO: DNS probes using dns-3521/dns-test-097dd73d-de69-443e-84e9-6b7c453f08d9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:13:41.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3521" for this suite. 
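The probers above are plain dig queries run from inside the cluster; any pod with dig installed can reproduce them. Three representative lookups, taken verbatim from the commands this test generated:

dig +notcp +noall +answer +search dns-test-service.dns-3521.svc.cluster.local A
dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3521.svc.cluster.local SRV
dig +notcp +noall +answer +search 199.169.106.10.in-addr.arpa. PTR    # reverse lookup of the service ClusterIP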
Feb 20 14:13:47.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:13:48.003: INFO: namespace dns-3521 deletion completed in 6.237854566s • [SLOW TEST:24.067 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:13:48.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 20 14:14:06.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:06.296: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 14:14:08.296: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:08.309: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 14:14:10.296: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:10.305: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 14:14:12.296: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:12.305: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 14:14:14.296: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:14.307: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 14:14:16.296: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:16.305: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 14:14:18.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 14:14:18.309: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:14:18.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2761" for this suite. 
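The pod under test registers an HTTP postStart hook pointed at the handler pod created in the BeforeEach. A minimal sketch of the hook stanza; the path, port, and host below are illustrative stand-ins for the handler's real address:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name as it appears in the run above
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical handler endpoint
          port: 8080                   # hypothetical handler port
          host: 10.44.0.2              # hypothetical handler pod IP
EOF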
Feb 20 14:14:40.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:14:40.442: INFO: namespace container-lifecycle-hook-2761 deletion completed in 22.123105698s • [SLOW TEST:52.439 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:14:40.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-2099631c-a4a6-4a87-b188-198579a6d328 STEP: Creating a pod to test consume configMaps Feb 20 14:14:40.598: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c" in namespace "projected-1285" to be "success or failure" Feb 20 14:14:40.612: INFO: Pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.691927ms Feb 20 14:14:42.626: INFO: Pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027544072s Feb 20 14:14:44.663: INFO: Pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064773444s Feb 20 14:14:46.672: INFO: Pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073915497s Feb 20 14:14:48.684: INFO: Pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085241424s STEP: Saw pod success Feb 20 14:14:48.684: INFO: Pod "pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c" satisfied condition "success or failure" Feb 20 14:14:48.689: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c container projected-configmap-volume-test: STEP: delete the pod Feb 20 14:14:48.759: INFO: Waiting for pod pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c to disappear Feb 20 14:14:48.764: INFO: Pod pod-projected-configmaps-e4035d6d-ed2f-4bc9-84dc-2d6506bd288c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:14:48.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1285" for this suite. 
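"With mappings" means the ConfigMap keys are remapped to explicit file paths via items:, instead of being mounted under their own key names. A sketch with hypothetical key and path names:

kubectl create configmap cm-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-map-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/config/path/to/data-2"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-map
          items:
          - key: data-1                # key in the ConfigMap...
            path: path/to/data-2       # ...surfaces at this relative path instead
EOF
kubectl logs projected-cm-map-demo     # expected to print value-1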
Feb 20 14:14:54.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:14:55.005: INFO: namespace projected-1285 deletion completed in 6.235370871s • [SLOW TEST:14.563 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:14:55.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 20 14:15:04.200: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:15:05.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-244" for this suite. Feb 20 14:15:27.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:15:27.412: INFO: namespace replicaset-244 deletion completed in 22.163888352s • [SLOW TEST:32.406 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:15:27.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-8762 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-8762 STEP: Deleting pre-stop pod Feb 20 14:15:52.645: INFO: Saw:
{
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:15:52.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8762" for this suite. Feb 20 14:16:30.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:16:30.888: INFO: namespace prestop-8762 deletion completed in 38.191913794s • [SLOW TEST:63.476 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:16:30.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 20 14:16:31.050: INFO: Waiting up to 5m0s for pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7" in namespace "downward-api-5912" to be "success or failure" Feb 20 14:16:31.068: INFO: Pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.722274ms Feb 20 14:16:33.075: INFO: Pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025570216s Feb 20 14:16:35.084: INFO: Pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033788267s Feb 20 14:16:37.090: INFO: Pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040364774s Feb 20 14:16:39.098: INFO: Pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.048480021s STEP: Saw pod success Feb 20 14:16:39.098: INFO: Pod "downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7" satisfied condition "success or failure" Feb 20 14:16:39.103: INFO: Trying to get logs from node iruya-node pod downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7 container dapi-container: STEP: delete the pod Feb 20 14:16:39.186: INFO: Waiting for pod downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7 to disappear Feb 20 14:16:39.256: INFO: Pod downward-api-0d4bd33c-bb4b-44ca-929d-81581246dfc7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:16:39.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5912" for this suite. Feb 20 14:16:45.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:16:45.493: INFO: namespace downward-api-5912 deletion completed in 6.23240661s • [SLOW TEST:14.604 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:16:45.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9829.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9829.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 14:16:57.712: INFO: File wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-df24c931-ef10-4eda-aa33-6b7f5e225926 contains '' instead of 'foo.example.com.' Feb 20 14:16:57.721: INFO: File jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-df24c931-ef10-4eda-aa33-6b7f5e225926 contains '' instead of 'foo.example.com.' 
Feb 20 14:16:57.721: INFO: Lookups using dns-9829/dns-test-df24c931-ef10-4eda-aa33-6b7f5e225926 failed for: [wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local] Feb 20 14:17:02.741: INFO: DNS probes using dns-test-df24c931-ef10-4eda-aa33-6b7f5e225926 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9829.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9829.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 14:17:17.088: INFO: File wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad contains '' instead of 'bar.example.com.' Feb 20 14:17:17.096: INFO: File jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad contains '' instead of 'bar.example.com.' Feb 20 14:17:17.096: INFO: Lookups using dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad failed for: [wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local] Feb 20 14:17:22.748: INFO: File wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 20 14:17:22.757: INFO: File jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 20 14:17:22.757: INFO: Lookups using dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad failed for: [wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local] Feb 20 14:17:27.112: INFO: File wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 20 14:17:27.121: INFO: File jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 20 14:17:27.121: INFO: Lookups using dns-9829/dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad failed for: [wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local] Feb 20 14:17:32.113: INFO: DNS probes using dns-test-89f68c25-79ad-4a66-8599-b47d4f0c4bad succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9829.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9829.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 14:17:46.465: INFO: File wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-5627c7fb-cba8-48a9-93f3-78d4050b82a4 contains '' instead of '10.98.73.255' Feb 20 14:17:46.503: INFO: File jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local from pod dns-9829/dns-test-5627c7fb-cba8-48a9-93f3-78d4050b82a4 contains '' instead of '10.98.73.255' Feb 20 14:17:46.503: INFO: Lookups using dns-9829/dns-test-5627c7fb-cba8-48a9-93f3-78d4050b82a4 failed for: [wheezy_udp@dns-test-service-3.dns-9829.svc.cluster.local jessie_udp@dns-test-service-3.dns-9829.svc.cluster.local] Feb 20 14:17:51.527: INFO: DNS probes using dns-test-5627c7fb-cba8-48a9-93f3-78d4050b82a4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:17:51.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9829" for this suite. 
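The probe output above captures the whole behavior of this spec: while dns-test-service-3 is an ExternalName Service, its cluster DNS name resolves to a CNAME (first foo.example.com., then bar.example.com. after the update), and once the Service is converted to type ClusterIP the same name resolves to an A record (10.98.73.255 in this run). The transient '' results are just the probe files not having been written yet; the loop retries until the expected answer appears. A minimal sketch of the same setup outside the e2e harness, using kubectl directly (the namespace and names mirror this run but are otherwise illustrative):

# Create the ExternalName Service; cluster DNS answers its name with a CNAME.
kubectl create service externalname dns-test-service-3 \
  --external-name foo.example.com -n dns-9829

# From any pod with dig installed (as the wheezy/jessie probers do):
#   dig +short dns-test-service-3.dns-9829.svc.cluster.local CNAME
#   -> foo.example.com.

# Re-point the Service; the CNAME target follows the spec change.
kubectl patch service dns-test-service-3 -n dns-9829 \
  -p '{"spec":{"externalName":"bar.example.com"}}'

# After the suite converts the Service to type ClusterIP, an A lookup on the
# same name returns the assigned cluster IP instead of a CNAME.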
Feb 20 14:17:57.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:17:57.944: INFO: namespace dns-9829 deletion completed in 6.213518402s • [SLOW TEST:72.451 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:17:57.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 20 14:17:58.038: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 20 14:17:58.057: INFO: Waiting for terminating namespaces to be deleted... Feb 20 14:17:58.060: INFO: Logging pods the kubelet thinks are on node iruya-node before test Feb 20 14:17:58.073: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded) Feb 20 14:17:58.073: INFO: Container kube-bench ready: false, restart count 0 Feb 20 14:17:58.073: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded) Feb 20 14:17:58.073: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 14:17:58.073: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 20 14:17:58.073: INFO: Container weave ready: true, restart count 0 Feb 20 14:17:58.073: INFO: Container weave-npc ready: true, restart count 0 Feb 20 14:17:58.073: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test Feb 20 14:17:58.087: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded) Feb 20 14:17:58.087: INFO: Container kube-scheduler ready: true, restart count 15 Feb 20 14:17:58.087: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Feb 20 14:17:58.087: INFO: Container coredns ready: true, restart count 0 Feb 20 14:17:58.087: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded) Feb 20 14:17:58.087: INFO: Container etcd ready: true, restart count 0 Feb 20 14:17:58.087: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 20 14:17:58.087: INFO: Container weave ready: true, restart count 0 Feb 20 14:17:58.087: INFO: Container weave-npc ready: true, restart count 0 Feb 20 14:17:58.087: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 20 14:17:58.087: INFO: Container coredns ready: true, restart count 0 Feb 20 14:17:58.087: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded) Feb 20 14:17:58.087: INFO: Container kube-controller-manager ready: true, restart count 23 Feb 20 14:17:58.087: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded) Feb 20 14:17:58.087: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 14:17:58.087: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded) Feb 20 14:17:58.087: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f5220bc986f726], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:17:59.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5758" for this suite. Feb 20 14:18:05.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:18:05.285: INFO: namespace sched-pred-5758 deletion completed in 6.166409247s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.339 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:18:05.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0220 14:18:36.007694 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 14:18:36.007: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:18:36.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4267" for this suite. Feb 20 14:18:42.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:18:42.127: INFO: namespace gc-4267 deletion completed in 6.115927137s • [SLOW TEST:36.842 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:18:42.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 20 14:18:53.519: INFO: Successfully updated pod "labelsupdate6ff9209a-3b7a-48f6-9f29-0e421a77a352" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:18:55.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9333" for this suite. 
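The Downward API volume spec above ("should update labels on modification") relies on the kubelet re-projecting downwardAPI volume contents when pod metadata changes: the pod mounts its own labels as a file, the framework edits the labels ("Successfully updated pod ..."), and then polls the mounted file until the new values appear. A rough equivalent with plain kubectl and busybox; pod name, label key, and image are illustrative, not what the suite uses:

# Pod that projects its own labels into /etc/podinfo/labels.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo            # illustrative name
  labels:
    stage: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Changing the label is eventually reflected in the projected file.
kubectl label pod labels-demo stage=two --overwrite

The refresh is eventual, not instant: the kubelet rewrites the file on its sync loop, which is why the spec polls rather than asserting immediately.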
Feb 20 14:19:17.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:19:17.759: INFO: namespace downward-api-9333 deletion completed in 22.148937305s • [SLOW TEST:35.631 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:19:17.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 20 14:19:28.468: INFO: Successfully updated pod "annotationupdate59e7595b-e2a1-47b5-975e-10d203740a80" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:19:30.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6463" for this suite. 
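The annotations variant above is the same construction with the volume item pointed at metadata.annotations instead of metadata.labels; only the trigger differs. Assuming a pod shaped like the labels sketch earlier, with an item path "annotations" and fieldPath "metadata.annotations":

# Edit an annotation; the kubelet rewrites the projected file on its next sync.
kubectl annotate pod annotations-demo build=two --overwrite   # pod name illustrative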
Feb 20 14:19:52.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:19:52.714: INFO: namespace downward-api-6463 deletion completed in 22.130502084s • [SLOW TEST:34.955 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:19:52.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:20:00.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6767" for this suite. 
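The Kubelet spec above schedules a busybox container with a read-only root filesystem and verifies that writes to it fail. The behavior comes down to a single container-level securityContext field; a minimal sketch (pod name and command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo        # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # The write fails because / is mounted read-only; the ';' keeps the
    # container running afterwards so the failure can be inspected.
    command: ["sh", "-c", "touch /should-fail; sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

Explicitly mounted writable volumes such as emptyDir are unaffected; only the image's root filesystem is locked down.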
Feb 20 14:20:44.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:20:45.129: INFO: namespace kubelet-test-6767 deletion completed in 44.172916156s • [SLOW TEST:52.415 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:20:45.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 20 14:20:45.252: INFO: Waiting up to 5m0s for pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882" in namespace "emptydir-3546" to be "success or failure" Feb 20 14:20:45.292: INFO: Pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882": Phase="Pending", Reason="", readiness=false. Elapsed: 39.754062ms Feb 20 14:20:47.303: INFO: Pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050525541s Feb 20 14:20:49.313: INFO: Pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061096259s Feb 20 14:20:51.325: INFO: Pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073067058s Feb 20 14:20:53.332: INFO: Pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079346326s STEP: Saw pod success Feb 20 14:20:53.332: INFO: Pod "pod-d880cc57-ccd8-49b0-b991-6c79b521f882" satisfied condition "success or failure" Feb 20 14:20:53.336: INFO: Trying to get logs from node iruya-node pod pod-d880cc57-ccd8-49b0-b991-6c79b521f882 container test-container: STEP: delete the pod Feb 20 14:20:53.493: INFO: Waiting for pod pod-d880cc57-ccd8-49b0-b991-6c79b521f882 to disappear Feb 20 14:20:53.505: INFO: Pod pod-d880cc57-ccd8-49b0-b991-6c79b521f882 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:20:53.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3546" for this suite. 
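All of the EmptyDir permission specs in this run follow the same template: mount an emptyDir on the requested medium, create a file with the requested mode as the requested user, and assert on the listing the container prints before it exits (hence the Pending-then-Succeeded phase transitions above). A busybox approximation of the (non-root,0666,default) case that just finished; the UID, paths, and image are illustrative, since the suite uses its own mounttest image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # non-root; emptyDir is created world-writable
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                   # default medium: backed by node disk
  restartPolicy: Never
EOF
kubectl logs emptydir-demo         # expect a -rw-rw-rw- entry for /mnt/f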
Feb 20 14:20:59.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:20:59.711: INFO: namespace emptydir-3546 deletion completed in 6.171187764s • [SLOW TEST:14.581 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:20:59.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 20 14:20:59.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1460,SelfLink:/api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed,UID:8df34ea5-9261-47e9-ad9a-d8c76d9a2a3a,ResourceVersion:25084423,Generation:0,CreationTimestamp:2020-02-20 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 20 14:20:59.902: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1460,SelfLink:/api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed,UID:8df34ea5-9261-47e9-ad9a-d8c76d9a2a3a,ResourceVersion:25084424,Generation:0,CreationTimestamp:2020-02-20 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 20 14:20:59.921: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1460,SelfLink:/api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed,UID:8df34ea5-9261-47e9-ad9a-d8c76d9a2a3a,ResourceVersion:25084425,Generation:0,CreationTimestamp:2020-02-20 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 20 14:20:59.921: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1460,SelfLink:/api/v1/namespaces/watch-1460/configmaps/e2e-watch-test-watch-closed,UID:8df34ea5-9261-47e9-ad9a-d8c76d9a2a3a,ResourceVersion:25084426,Generation:0,CreationTimestamp:2020-02-20 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:20:59.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1460" for this suite. Feb 20 14:21:05.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:21:06.105: INFO: namespace watch-1460 deletion completed in 6.180387965s • [SLOW TEST:6.393 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:21:06.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Feb 20 14:21:06.268: INFO: Waiting up to 5m0s for pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08" in namespace "containers-6653" to be "success or failure" Feb 20 14:21:06.296: INFO: Pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08": Phase="Pending", Reason="", readiness=false. Elapsed: 28.035786ms Feb 20 14:21:08.313: INFO: Pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04469271s Feb 20 14:21:10.321: INFO: Pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053455936s Feb 20 14:21:12.327: INFO: Pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059212747s Feb 20 14:21:14.339: INFO: Pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071019234s STEP: Saw pod success Feb 20 14:21:14.339: INFO: Pod "client-containers-090e0937-1d6b-431a-8ece-f940954ead08" satisfied condition "success or failure" Feb 20 14:21:14.345: INFO: Trying to get logs from node iruya-node pod client-containers-090e0937-1d6b-431a-8ece-f940954ead08 container test-container: STEP: delete the pod Feb 20 14:21:14.434: INFO: Waiting for pod client-containers-090e0937-1d6b-431a-8ece-f940954ead08 to disappear Feb 20 14:21:14.438: INFO: Pod client-containers-090e0937-1d6b-431a-8ece-f940954ead08 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:21:14.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6653" for this suite. Feb 20 14:21:20.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:21:20.656: INFO: namespace containers-6653 deletion completed in 6.170921609s • [SLOW TEST:14.551 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:21:20.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 20 14:21:20.781: INFO: Waiting up to 5m0s for pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203" in namespace "emptydir-3641" to be "success or failure" Feb 20 14:21:20.789: INFO: Pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219262ms Feb 20 14:21:22.804: INFO: Pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023077377s Feb 20 14:21:24.834: INFO: Pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052778276s Feb 20 14:21:26.842: INFO: Pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.061316205s Feb 20 14:21:28.854: INFO: Pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073009686s STEP: Saw pod success Feb 20 14:21:28.854: INFO: Pod "pod-d6f13508-89a9-47f2-a9a3-2b254cea8203" satisfied condition "success or failure" Feb 20 14:21:28.863: INFO: Trying to get logs from node iruya-node pod pod-d6f13508-89a9-47f2-a9a3-2b254cea8203 container test-container: STEP: delete the pod Feb 20 14:21:28.928: INFO: Waiting for pod pod-d6f13508-89a9-47f2-a9a3-2b254cea8203 to disappear Feb 20 14:21:28.936: INFO: Pod pod-d6f13508-89a9-47f2-a9a3-2b254cea8203 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:21:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3641" for this suite. Feb 20 14:21:34.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:21:35.131: INFO: namespace emptydir-3641 deletion completed in 6.168186416s • [SLOW TEST:14.474 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:21:35.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2557 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Feb 20 14:21:35.261: INFO: Found 0 stateful pods, waiting for 3 Feb 20 14:21:45.364: INFO: Found 2 stateful pods, waiting for 3 Feb 20 14:21:55.275: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 14:21:55.275: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 14:21:55.275: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 14:22:05.309: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 14:22:05.309: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 14:22:05.309: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 20 14:22:05.351: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 20 14:22:15.423: INFO: Updating stateful set ss2 Feb 20 14:22:15.503: INFO: Waiting for Pod statefulset-2557/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 20 14:22:25.929: INFO: Found 2 stateful pods, waiting for 3 Feb 20 14:22:35.938: INFO: Found 2 stateful pods, waiting for 3 Feb 20 14:22:45.938: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 14:22:45.938: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 14:22:45.938: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 20 14:22:45.970: INFO: Updating stateful set ss2 Feb 20 14:22:46.023: INFO: Waiting for Pod statefulset-2557/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 14:22:56.067: INFO: Waiting for Pod statefulset-2557/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 14:23:06.086: INFO: Updating stateful set ss2 Feb 20 14:23:06.124: INFO: Waiting for StatefulSet statefulset-2557/ss2 to complete update Feb 20 14:23:06.124: INFO: Waiting for Pod statefulset-2557/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 14:23:16.148: INFO: Waiting for StatefulSet statefulset-2557/ss2 to complete update Feb 20 14:23:16.148: INFO: Waiting for Pod statefulset-2557/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 20 14:23:26.159: INFO: Deleting all statefulset in ns statefulset-2557 Feb 20 14:23:26.165: INFO: Scaling statefulset ss2 to 0 Feb 20 14:24:06.194: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 14:24:06.197: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:24:06.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2557" for this suite. 
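Everything the StatefulSet spec above exercises, both the canary and the phased rollout, is driven by spec.updateStrategy.rollingUpdate.partition: pods with an ordinal greater than or equal to the partition move to the new revision, lower ordinals stay on the old one. A sketch against the three-replica ss2 set from this run (the container name nginx is an assumption):

# Canary: with partition=2 only ss2-2 picks up a template change.
kubectl patch statefulset ss2 -n statefulset-2557 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# The template change itself (this run went nginx:1.14-alpine -> 1.15-alpine):
kubectl set image statefulset/ss2 -n statefulset-2557 \
  nginx=docker.io/library/nginx:1.15-alpine

# Phased rollout: lower the partition step by step (2 -> 1 -> 0); each step
# rolls one more ordinal, matching ss2-1 and then ss2-0 updating above.
kubectl patch statefulset ss2 -n statefulset-2557 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'

The "Restoring Pods to the correct revision when they are deleted" step works the same way: a deleted pod whose ordinal is below the partition is recreated from the old revision, not the new one.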
Feb 20 14:24:14.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:24:14.390: INFO: namespace statefulset-2557 deletion completed in 8.165608546s • [SLOW TEST:159.259 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:24:14.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 20 14:24:14.650: INFO: Number of nodes with available pods: 0 Feb 20 14:24:14.650: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:16.405: INFO: Number of nodes with available pods: 0 Feb 20 14:24:16.405: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:16.666: INFO: Number of nodes with available pods: 0 Feb 20 14:24:16.666: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:17.825: INFO: Number of nodes with available pods: 0 Feb 20 14:24:17.825: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:18.677: INFO: Number of nodes with available pods: 0 Feb 20 14:24:18.677: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:19.662: INFO: Number of nodes with available pods: 0 Feb 20 14:24:19.662: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:21.792: INFO: Number of nodes with available pods: 0 Feb 20 14:24:21.793: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:22.666: INFO: Number of nodes with available pods: 0 Feb 20 14:24:22.666: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:23.715: INFO: Number of nodes with available pods: 0 Feb 20 14:24:23.715: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:24.664: INFO: Number of nodes with available pods: 0 Feb 20 14:24:24.664: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:25.666: INFO: Number of nodes with available pods: 2 Feb 20 14:24:25.666: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Feb 20 14:24:25.714: INFO: Number of nodes with available pods: 1 Feb 20 14:24:25.714: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:26.727: INFO: Number of nodes with available pods: 1 Feb 20 14:24:26.727: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:27.733: INFO: Number of nodes with available pods: 1 Feb 20 14:24:27.733: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:28.735: INFO: Number of nodes with available pods: 1 Feb 20 14:24:28.736: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:29.730: INFO: Number of nodes with available pods: 1 Feb 20 14:24:29.730: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:30.732: INFO: Number of nodes with available pods: 1 Feb 20 14:24:30.732: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:31.728: INFO: Number of nodes with available pods: 1 Feb 20 14:24:31.728: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:32.732: INFO: Number of nodes with available pods: 1 Feb 20 14:24:32.732: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:33.731: INFO: Number of nodes with available pods: 1 Feb 20 14:24:33.731: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:34.757: INFO: Number of nodes with available pods: 1 Feb 20 14:24:34.757: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:35.728: INFO: Number of nodes with available pods: 1 Feb 20 14:24:35.728: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:36.738: INFO: Number of nodes with available pods: 1 Feb 20 14:24:36.738: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:37.732: INFO: Number of nodes with available pods: 1 Feb 20 14:24:37.732: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:38.729: INFO: Number of nodes with available pods: 1 Feb 20 14:24:38.729: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:39.740: INFO: Number of nodes with available pods: 1 Feb 20 14:24:39.740: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:40.740: INFO: Number of nodes with available pods: 1 Feb 20 14:24:40.740: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:41.729: INFO: Number of nodes with available pods: 1 Feb 20 14:24:41.729: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:42.729: INFO: Number of nodes with available pods: 1 Feb 20 14:24:42.729: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:43.733: INFO: Number of nodes with available pods: 1 Feb 20 14:24:43.733: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:24:44.815: INFO: Number of nodes with available pods: 2 Feb 20 14:24:44.815: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4481, will wait for the garbage collector to delete the pods Feb 20 14:24:44.955: INFO: Deleting DaemonSet.extensions daemon-set took: 78.951137ms Feb 20 14:24:45.255: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.402279ms Feb 20 14:24:57.962: INFO: Number of nodes with available pods: 0 Feb 20 14:24:57.962: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 
14:24:57.965: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4481/daemonsets","resourceVersion":"25085106"},"items":null} Feb 20 14:24:57.967: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4481/pods","resourceVersion":"25085106"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:24:57.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4481" for this suite. Feb 20 14:25:04.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:25:04.094: INFO: namespace daemonsets-4481 deletion completed in 6.109901202s • [SLOW TEST:49.703 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:25:04.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 20 14:25:04.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 20 14:25:04.280: INFO: stderr: "" Feb 20 14:25:04.280: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:25:04.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5807" for this suite. 
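The Kubectl version spec above only asserts that both the Client Version and Server Version stanzas appear in the output it quotes. The same information is available in machine-readable form, which is generally what scripts should parse instead of the Go struct dump:

kubectl version --short    # one line each for client and server
kubectl version -o json    # both version.Info objects as JSON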
Feb 20 14:25:10.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:25:10.409: INFO: namespace kubectl-5807 deletion completed in 6.114396221s • [SLOW TEST:6.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:25:10.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 20 14:25:10.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f" in namespace "downward-api-8990" to be "success or failure" Feb 20 14:25:10.591: INFO: Pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.282756ms Feb 20 14:25:12.602: INFO: Pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020540636s Feb 20 14:25:14.608: INFO: Pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02605831s Feb 20 14:25:16.619: INFO: Pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037228902s Feb 20 14:25:18.639: INFO: Pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057081608s STEP: Saw pod success Feb 20 14:25:18.639: INFO: Pod "downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f" satisfied condition "success or failure" Feb 20 14:25:18.650: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f container client-container: STEP: delete the pod Feb 20 14:25:21.287: INFO: Waiting for pod downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f to disappear Feb 20 14:25:21.298: INFO: Pod downwardapi-volume-f42413c5-d747-49f6-91df-297d43be797f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:25:21.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8990" for this suite. 
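Unlike the label and annotation specs earlier, this Downward API volume uses a resourceFieldRef, so the projected file carries a container resource value, with divisor selecting the unit. A sketch that writes the CPU limit in millicores; the names, limit, and divisor are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo             # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # 500m / 1m -> the file contains "500"
  restartPolicy: Never
EOF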
Feb 20 14:25:27.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:25:27.550: INFO: namespace downward-api-8990 deletion completed in 6.242480253s • [SLOW TEST:17.140 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:25:27.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 20 14:25:27.678: INFO: Waiting up to 5m0s for pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd" in namespace "emptydir-9244" to be "success or failure" Feb 20 14:25:27.694: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.219384ms Feb 20 14:25:29.711: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033519474s Feb 20 14:25:31.725: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047069657s Feb 20 14:25:33.735: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056985609s Feb 20 14:25:35.753: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075399422s Feb 20 14:25:37.764: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086256868s STEP: Saw pod success Feb 20 14:25:37.764: INFO: Pod "pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd" satisfied condition "success or failure" Feb 20 14:25:37.770: INFO: Trying to get logs from node iruya-node pod pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd container test-container: STEP: delete the pod Feb 20 14:25:37.973: INFO: Waiting for pod pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd to disappear Feb 20 14:25:37.978: INFO: Pod pod-ed9d23f4-d038-4ccd-9e59-108915c53dcd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:25:37.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9244" for this suite. 
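The tmpfs EmptyDir variants differ from the default-medium ones in exactly one field: medium: Memory backs the volume with RAM (tmpfs) instead of node disk, and its usage counts against the container's memory accounting. Dropped into the same pod shape as the earlier emptyDir sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # the (non-root,0666,tmpfs) case runs as non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep ' /mnt '"]   # shows a tmpfs mount
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # RAM-backed (tmpfs)
  restartPolicy: Never
EOF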
Feb 20 14:25:44.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:25:44.107: INFO: namespace emptydir-9244 deletion completed in 6.122576292s • [SLOW TEST:16.557 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:25:44.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 20 14:25:44.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2505' Feb 20 14:25:46.738: INFO: stderr: "" Feb 20 14:25:46.738: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 14:25:46.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2505' Feb 20 14:25:47.024: INFO: stderr: "" Feb 20 14:25:47.024: INFO: stdout: "update-demo-nautilus-2sljq update-demo-nautilus-42svd " Feb 20 14:25:47.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sljq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2505' Feb 20 14:25:47.179: INFO: stderr: "" Feb 20 14:25:47.179: INFO: stdout: "" Feb 20 14:25:47.179: INFO: update-demo-nautilus-2sljq is created but not running Feb 20 14:25:52.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2505' Feb 20 14:25:53.453: INFO: stderr: "" Feb 20 14:25:53.453: INFO: stdout: "update-demo-nautilus-2sljq update-demo-nautilus-42svd " Feb 20 14:25:53.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sljq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2505' Feb 20 14:25:53.925: INFO: stderr: "" Feb 20 14:25:53.925: INFO: stdout: "" Feb 20 14:25:53.925: INFO: update-demo-nautilus-2sljq is created but not running Feb 20 14:25:58.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2505' Feb 20 14:25:59.095: INFO: stderr: "" Feb 20 14:25:59.095: INFO: stdout: "update-demo-nautilus-2sljq update-demo-nautilus-42svd " Feb 20 14:25:59.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sljq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2505' Feb 20 14:25:59.181: INFO: stderr: "" Feb 20 14:25:59.181: INFO: stdout: "true" Feb 20 14:25:59.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sljq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2505' Feb 20 14:25:59.303: INFO: stderr: "" Feb 20 14:25:59.303: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 14:25:59.303: INFO: validating pod update-demo-nautilus-2sljq Feb 20 14:25:59.341: INFO: got data: { "image": "nautilus.jpg" } Feb 20 14:25:59.341: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 14:25:59.341: INFO: update-demo-nautilus-2sljq is verified up and running Feb 20 14:25:59.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42svd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2505' Feb 20 14:25:59.415: INFO: stderr: "" Feb 20 14:25:59.415: INFO: stdout: "true" Feb 20 14:25:59.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42svd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2505' Feb 20 14:25:59.501: INFO: stderr: "" Feb 20 14:25:59.501: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 14:25:59.501: INFO: validating pod update-demo-nautilus-42svd Feb 20 14:25:59.517: INFO: got data: { "image": "nautilus.jpg" } Feb 20 14:25:59.518: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 14:25:59.518: INFO: update-demo-nautilus-42svd is verified up and running STEP: using delete to clean up resources Feb 20 14:25:59.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2505' Feb 20 14:25:59.660: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 20 14:25:59.660: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 20 14:25:59.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2505' Feb 20 14:25:59.760: INFO: stderr: "No resources found.\n" Feb 20 14:25:59.760: INFO: stdout: "" Feb 20 14:25:59.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2505 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 14:26:00.060: INFO: stderr: "" Feb 20 14:26:00.060: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:26:00.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2505" for this suite. Feb 20 14:26:24.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:26:24.805: INFO: namespace kubectl-2505 deletion completed in 24.737258037s • [SLOW TEST:40.697 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:26:24.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 14:26:24.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3988' Feb 20 14:26:25.081: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 14:26:25.081: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Feb 20 14:26:27.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3988' Feb 20 14:26:27.230: INFO: stderr: "" Feb 20 14:26:27.230: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:26:27.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3988" for this suite. Feb 20 14:26:33.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:26:33.441: INFO: namespace kubectl-3988 deletion completed in 6.206239878s • [SLOW TEST:8.635 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:26:33.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 20 14:26:33.524: INFO: Waiting up to 5m0s for pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b" in namespace "var-expansion-2088" to be "success or failure" Feb 20 14:26:33.527: INFO: Pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.581666ms Feb 20 14:26:35.538: INFO: Pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014520324s Feb 20 14:26:37.544: INFO: Pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020254963s Feb 20 14:26:39.551: INFO: Pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026928185s Feb 20 14:26:41.560: INFO: Pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.035783903s STEP: Saw pod success Feb 20 14:26:41.560: INFO: Pod "var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b" satisfied condition "success or failure" Feb 20 14:26:41.565: INFO: Trying to get logs from node iruya-node pod var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b container dapi-container: STEP: delete the pod Feb 20 14:26:41.827: INFO: Waiting for pod var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b to disappear Feb 20 14:26:41.850: INFO: Pod var-expansion-32f0c9b7-139d-4dbc-b397-d63952eaf14b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:26:41.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2088" for this suite. Feb 20 14:26:47.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:26:48.088: INFO: namespace var-expansion-2088 deletion completed in 6.229187556s • [SLOW TEST:14.647 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:26:48.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 14:26:48.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7518' Feb 20 14:26:48.381: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 14:26:48.382: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Feb 20 14:26:50.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7518' Feb 20 14:26:50.607: INFO: stderr: "" Feb 20 14:26:50.607: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:26:50.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7518" for this suite. Feb 20 14:26:56.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:26:56.782: INFO: namespace kubectl-7518 deletion completed in 6.146508614s • [SLOW TEST:8.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:26:56.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-ddb75d5a-150d-4c3b-b51f-a95e778fd9e5 STEP: Creating a pod to test consume secrets Feb 20 14:26:57.035: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8" in namespace "projected-258" to be "success or failure" Feb 20 14:26:57.065: INFO: Pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.550194ms Feb 20 14:26:59.073: INFO: Pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038135248s Feb 20 14:27:01.080: INFO: Pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045171536s Feb 20 14:27:03.089: INFO: Pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.053739075s Feb 20 14:27:05.100: INFO: Pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065099447s STEP: Saw pod success Feb 20 14:27:05.100: INFO: Pod "pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8" satisfied condition "success or failure" Feb 20 14:27:05.105: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8 container projected-secret-volume-test: STEP: delete the pod Feb 20 14:27:05.183: INFO: Waiting for pod pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8 to disappear Feb 20 14:27:05.290: INFO: Pod pod-projected-secrets-064a923c-f6cf-4f39-bcde-33e8f411b4e8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:27:05.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-258" for this suite. Feb 20 14:27:11.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:27:11.488: INFO: namespace projected-258 deletion completed in 6.184870765s • [SLOW TEST:14.703 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:27:11.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Feb 20 14:27:11.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8838' Feb 20 14:27:11.940: INFO: stderr: "" Feb 20 14:27:11.940: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 14:27:11.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8838' Feb 20 14:27:12.203: INFO: stderr: "" Feb 20 14:27:12.203: INFO: stdout: "update-demo-nautilus-549fk update-demo-nautilus-ds6hm " Feb 20 14:27:12.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-549fk -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:12.418: INFO: stderr: "" Feb 20 14:27:12.419: INFO: stdout: "" Feb 20 14:27:12.419: INFO: update-demo-nautilus-549fk is created but not running Feb 20 14:27:17.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8838' Feb 20 14:27:18.868: INFO: stderr: "" Feb 20 14:27:18.868: INFO: stdout: "update-demo-nautilus-549fk update-demo-nautilus-ds6hm " Feb 20 14:27:18.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-549fk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:19.380: INFO: stderr: "" Feb 20 14:27:19.380: INFO: stdout: "" Feb 20 14:27:19.380: INFO: update-demo-nautilus-549fk is created but not running Feb 20 14:27:24.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8838' Feb 20 14:27:24.549: INFO: stderr: "" Feb 20 14:27:24.549: INFO: stdout: "update-demo-nautilus-549fk update-demo-nautilus-ds6hm " Feb 20 14:27:24.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-549fk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:24.683: INFO: stderr: "" Feb 20 14:27:24.683: INFO: stdout: "true" Feb 20 14:27:24.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-549fk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:24.770: INFO: stderr: "" Feb 20 14:27:24.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 14:27:24.770: INFO: validating pod update-demo-nautilus-549fk Feb 20 14:27:24.780: INFO: got data: { "image": "nautilus.jpg" } Feb 20 14:27:24.780: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 14:27:24.780: INFO: update-demo-nautilus-549fk is verified up and running Feb 20 14:27:24.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ds6hm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:24.903: INFO: stderr: "" Feb 20 14:27:24.903: INFO: stdout: "true" Feb 20 14:27:24.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ds6hm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:25.031: INFO: stderr: "" Feb 20 14:27:25.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 14:27:25.031: INFO: validating pod update-demo-nautilus-ds6hm Feb 20 14:27:25.049: INFO: got data: { "image": "nautilus.jpg" } Feb 20 14:27:25.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 14:27:25.049: INFO: update-demo-nautilus-ds6hm is verified up and running STEP: rolling-update to new replication controller Feb 20 14:27:25.053: INFO: scanned /root for discovery docs: Feb 20 14:27:25.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8838' Feb 20 14:27:57.693: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 20 14:27:57.693: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 14:27:57.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8838' Feb 20 14:27:57.900: INFO: stderr: "" Feb 20 14:27:57.900: INFO: stdout: "update-demo-kitten-9jqrv update-demo-kitten-g4bs2 " Feb 20 14:27:57.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9jqrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:58.039: INFO: stderr: "" Feb 20 14:27:58.039: INFO: stdout: "true" Feb 20 14:27:58.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9jqrv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:58.220: INFO: stderr: "" Feb 20 14:27:58.220: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 20 14:27:58.220: INFO: validating pod update-demo-kitten-9jqrv Feb 20 14:27:58.259: INFO: got data: { "image": "kitten.jpg" } Feb 20 14:27:58.259: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 20 14:27:58.259: INFO: update-demo-kitten-9jqrv is verified up and running Feb 20 14:27:58.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g4bs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:58.346: INFO: stderr: "" Feb 20 14:27:58.346: INFO: stdout: "true" Feb 20 14:27:58.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g4bs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8838' Feb 20 14:27:58.439: INFO: stderr: "" Feb 20 14:27:58.439: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 20 14:27:58.439: INFO: validating pod update-demo-kitten-g4bs2 Feb 20 14:27:58.462: INFO: got data: { "image": "kitten.jpg" } Feb 20 14:27:58.462: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 20 14:27:58.462: INFO: update-demo-kitten-g4bs2 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:27:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8838" for this suite. Feb 20 14:28:24.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:28:24.630: INFO: namespace kubectl-8838 deletion completed in 26.159788941s • [SLOW TEST:73.142 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:28:24.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 20 14:28:24.846: INFO: Number of nodes with available pods: 0 Feb 20 14:28:24.846: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:25.868: INFO: Number of nodes with available pods: 0 Feb 20 14:28:25.868: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:27.263: INFO: Number of nodes with available pods: 0 Feb 20 14:28:27.264: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:27.872: INFO: Number of nodes with available pods: 0 Feb 20 14:28:27.872: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:28.860: INFO: Number of nodes with available pods: 0 Feb 20 14:28:28.860: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:29.900: INFO: Number of nodes with available pods: 0 Feb 20 14:28:29.900: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:31.902: INFO: Number of nodes with available pods: 0 Feb 20 14:28:31.902: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:33.358: INFO: Number of nodes with available pods: 0 Feb 20 14:28:33.359: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:33.868: INFO: Number of nodes with available pods: 1 Feb 20 14:28:33.868: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 20 14:28:34.865: INFO: Number of nodes with available pods: 2 Feb 20 14:28:34.865: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 20 14:28:34.992: INFO: Number of nodes with available pods: 1 Feb 20 14:28:34.992: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:36.008: INFO: Number of nodes with available pods: 1 Feb 20 14:28:36.008: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:37.006: INFO: Number of nodes with available pods: 1 Feb 20 14:28:37.006: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:38.060: INFO: Number of nodes with available pods: 1 Feb 20 14:28:38.060: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:39.008: INFO: Number of nodes with available pods: 1 Feb 20 14:28:39.008: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:40.005: INFO: Number of nodes with available pods: 1 Feb 20 14:28:40.005: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:41.006: INFO: Number of nodes with available pods: 1 Feb 20 14:28:41.006: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:42.015: INFO: Number of nodes with available pods: 1 Feb 20 14:28:42.016: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:43.004: INFO: Number of nodes with available pods: 1 Feb 20 14:28:43.004: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:44.022: INFO: Number of nodes with available pods: 1 Feb 20 14:28:44.022: INFO: Node iruya-node is running more than one daemon pod Feb 20 14:28:45.004: INFO: Number of nodes with available pods: 2 Feb 20 14:28:45.004: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
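
[Editor's note] The step above forces one daemon pod's phase to 'Failed' and waits for the DaemonSet controller to replace it, which is why the available-pod count dips to 1 and recovers to 2. A rough manual equivalent, assuming the label key from the previous sketch and a placeholder pod name:

    kubectl --kubeconfig=/root/.kube/config get pods -l app=daemon-set -o wide --namespace=daemonsets-6457
    # delete one daemon pod by name; the controller should recreate it on the same node
    kubectl --kubeconfig=/root/.kube/config delete pod <daemon-pod-name> --namespace=daemonsets-6457
    kubectl --kubeconfig=/root/.kube/config get pods -l app=daemon-set --watch --namespace=daemonsets-6457
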
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6457, will wait for the garbage collector to delete the pods Feb 20 14:28:45.078: INFO: Deleting DaemonSet.extensions daemon-set took: 14.288209ms Feb 20 14:28:45.379: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.273593ms Feb 20 14:28:58.797: INFO: Number of nodes with available pods: 0 Feb 20 14:28:58.797: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 14:28:58.805: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6457/daemonsets","resourceVersion":"25085845"},"items":null} Feb 20 14:28:58.809: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6457/pods","resourceVersion":"25085845"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:28:58.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6457" for this suite. Feb 20 14:29:04.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:29:05.043: INFO: namespace daemonsets-6457 deletion completed in 6.149546618s • [SLOW TEST:40.413 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:29:05.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 20 14:29:05.213: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7640,SelfLink:/api/v1/namespaces/watch-7640/configmaps/e2e-watch-test-resource-version,UID:761f3abb-9221-4e4a-a538-41de27af9a69,ResourceVersion:25085886,Generation:0,CreationTimestamp:2020-02-20 14:29:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 20 14:29:05.213: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7640,SelfLink:/api/v1/namespaces/watch-7640/configmaps/e2e-watch-test-resource-version,UID:761f3abb-9221-4e4a-a538-41de27af9a69,ResourceVersion:25085887,Generation:0,CreationTimestamp:2020-02-20 14:29:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:29:05.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7640" for this suite. Feb 20 14:29:11.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:29:11.407: INFO: namespace watch-7640 deletion completed in 6.108245267s • [SLOW TEST:6.364 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:29:11.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-cf062855-00bb-4fbc-85eb-adc50169c769 STEP: Creating a pod to test consume configMaps Feb 20 14:29:11.539: INFO: Waiting up to 5m0s for pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075" in namespace "configmap-5084" to be "success or failure" Feb 20 14:29:11.544: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075": Phase="Pending", Reason="", readiness=false. Elapsed: 5.052532ms Feb 20 14:29:13.552: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013392398s Feb 20 14:29:15.560: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020839042s Feb 20 14:29:17.568: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.028967074s Feb 20 14:29:19.785: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245643997s Feb 20 14:29:21.800: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.260973725s STEP: Saw pod success Feb 20 14:29:21.800: INFO: Pod "pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075" satisfied condition "success or failure" Feb 20 14:29:21.811: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075 container configmap-volume-test: STEP: delete the pod Feb 20 14:29:21.924: INFO: Waiting for pod pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075 to disappear Feb 20 14:29:22.020: INFO: Pod pod-configmaps-5abcb1bd-bdc0-4af1-b4d9-e2653e798075 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:29:22.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5084" for this suite. Feb 20 14:29:28.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:29:28.176: INFO: namespace configmap-5084 deletion completed in 6.136109151s • [SLOW TEST:16.768 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:29:28.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-f05cea3c-a0d0-4f90-bd59-b20790556fbe in namespace container-probe-3546 Feb 20 14:29:36.387: INFO: Started pod liveness-f05cea3c-a0d0-4f90-bd59-b20790556fbe in namespace container-probe-3546 STEP: checking the pod's current state and verifying that restartCount is present Feb 20 14:29:36.393: INFO: Initial restart count of pod liveness-f05cea3c-a0d0-4f90-bd59-b20790556fbe is 0 Feb 20 14:30:00.525: INFO: Restart count of pod container-probe-3546/liveness-f05cea3c-a0d0-4f90-bd59-b20790556fbe is now 1 (24.132546485s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:30:00.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-3546" for this suite. Feb 20 14:30:06.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:30:06.866: INFO: namespace container-probe-3546 deletion completed in 6.277929554s • [SLOW TEST:38.689 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:30:06.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6963ffbe-ed5d-4b5e-b081-0e147ea9857f STEP: Creating a pod to test consume secrets Feb 20 14:30:07.056: INFO: Waiting up to 5m0s for pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f" in namespace "secrets-7318" to be "success or failure" Feb 20 14:30:07.070: INFO: Pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.431463ms Feb 20 14:30:09.087: INFO: Pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03065872s Feb 20 14:30:11.095: INFO: Pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03851096s Feb 20 14:30:13.100: INFO: Pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043908356s Feb 20 14:30:15.147: INFO: Pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090000964s STEP: Saw pod success Feb 20 14:30:15.147: INFO: Pod "pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f" satisfied condition "success or failure" Feb 20 14:30:15.153: INFO: Trying to get logs from node iruya-node pod pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f container secret-volume-test: STEP: delete the pod Feb 20 14:30:15.226: INFO: Waiting for pod pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f to disappear Feb 20 14:30:15.230: INFO: Pod pod-secrets-91d1dde8-10f5-4968-aeab-c760deee172f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:30:15.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7318" for this suite. 
Feb 20 14:30:21.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:30:21.447: INFO: namespace secrets-7318 deletion completed in 6.209835323s • [SLOW TEST:14.580 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:30:21.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Feb 20 14:30:21.521: INFO: Waiting up to 5m0s for pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905" in namespace "containers-7887" to be "success or failure" Feb 20 14:30:21.526: INFO: Pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905": Phase="Pending", Reason="", readiness=false. Elapsed: 4.823037ms Feb 20 14:30:23.534: INFO: Pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012650003s Feb 20 14:30:25.555: INFO: Pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033538898s Feb 20 14:30:27.561: INFO: Pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039574831s Feb 20 14:30:29.569: INFO: Pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047703433s STEP: Saw pod success Feb 20 14:30:29.569: INFO: Pod "client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905" satisfied condition "success or failure" Feb 20 14:30:29.573: INFO: Trying to get logs from node iruya-node pod client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905 container test-container: STEP: delete the pod Feb 20 14:30:29.716: INFO: Waiting for pod client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905 to disappear Feb 20 14:30:29.747: INFO: Pod client-containers-0d7f384d-9a63-4c79-a003-6dbe9bfe2905 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:30:29.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7887" for this suite. 
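
[Editor's note] "Overriding the image's default command (docker entrypoint)" means setting the container-level command field, which replaces the image's ENTRYPOINT. A minimal sketch; the pod name, image, and echoed string are illustrative, not the suite's actual values:

    cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=containers-7887
    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example        # placeholder name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                       # illustrative image
        command: ["/bin/sh", "-c", "echo entrypoint overridden"]   # replaces the image ENTRYPOINT
    EOF
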
Feb 20 14:30:35.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:30:35.999: INFO: namespace containers-7887 deletion completed in 6.244370556s • [SLOW TEST:14.551 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:30:35.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 20 14:33:37.411: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:37.458: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:39.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:39.473: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:41.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:41.466: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:43.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:43.470: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:45.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:45.470: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:47.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:47.489: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:49.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:49.468: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:51.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:51.467: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:53.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:53.470: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:55.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:55.471: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:57.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:57.470: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:33:59.458: INFO: Waiting 
for pod pod-with-poststart-exec-hook to disappear Feb 20 14:33:59.467: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:34:01.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:34:01.468: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:34:03.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:34:03.468: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:34:05.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:34:05.470: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 14:34:07.458: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 14:34:07.465: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:34:07.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3811" for this suite. Feb 20 14:34:31.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:34:31.651: INFO: namespace container-lifecycle-hook-3811 deletion completed in 24.178995724s • [SLOW TEST:235.652 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:34:31.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 20 14:34:40.363: INFO: Successfully updated pod "pod-update-ab68320d-e9a7-4fc2-9b44-a268789bdf45" STEP: verifying the updated pod is in kubernetes Feb 20 14:34:40.403: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 20 14:34:40.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5656" for this suite. 
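
[Editor's note] The "updating the pod" step above mutates a mutable field of the running pod in place (the conformance test toggles a label). A rough kubectl equivalent against the pod name from the log, with a placeholder label key/value:

    kubectl --kubeconfig=/root/.kube/config label pod pod-update-ab68320d-e9a7-4fc2-9b44-a268789bdf45 \
        time=morning --overwrite --namespace=pods-5656
    # or the same change expressed as a merge patch
    kubectl --kubeconfig=/root/.kube/config patch pod pod-update-ab68320d-e9a7-4fc2-9b44-a268789bdf45 \
        -p '{"metadata":{"labels":{"time":"morning"}}}' --namespace=pods-5656
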
Feb 20 14:35:04.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 14:35:04.587: INFO: namespace pods-5656 deletion completed in 24.173308074s • [SLOW TEST:32.935 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 20 14:35:04.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 20 14:35:04.667: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 20 14:35:04.675: INFO: Waiting for terminating namespaces to be deleted... Feb 20 14:35:04.679: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 20 14:35:04.692: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.692: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 14:35:04.692: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 20 14:35:04.692: INFO: Container weave ready: true, restart count 0 Feb 20 14:35:04.692: INFO: Container weave-npc ready: true, restart count 0 Feb 20 14:35:04.692: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.692: INFO: Container kube-bench ready: false, restart count 0 Feb 20 14:35:04.692: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 20 14:35:04.762: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container kube-controller-manager ready: true, restart count 23 Feb 20 14:35:04.762: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 14:35:04.762: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container kube-apiserver ready: true, restart count 0 Feb 20 14:35:04.762: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container kube-scheduler ready: true, restart count 15 Feb 20 14:35:04.762: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container coredns ready: true, restart count 0 Feb 20 
14:35:04.762: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container etcd ready: true, restart count 0 Feb 20 14:35:04.762: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 20 14:35:04.762: INFO: Container weave ready: true, restart count 0 Feb 20 14:35:04.762: INFO: Container weave-npc ready: true, restart count 0 Feb 20 14:35:04.762: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 20 14:35:04.762: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Feb 20 14:35:04.848: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Feb 20 14:35:04.848: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
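
[Editor's note] The "additional pod" named in the FailedScheduling event below requests more CPU than either node has left once the filler pods are placed. A minimal sketch of such a request; the CPU figure is illustrative, chosen only to exceed any remaining allocatable capacity:

    cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=sched-pred-5674
    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod
    spec:
      containers:
      - name: additional-pod
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "5"                         # deliberately above the remaining allocatable CPU
    EOF
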
STEP: Considering event: Type = [Normal], Name = [filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e.15f522fade5d2ba1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5674/filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e.15f522fc0bb134fa], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e.15f522fcd0108b28], Reason = [Created], Message = [Created container filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e.15f522fcfff6be04], Reason = [Started], Message = [Started container filler-pod-002d6411-9a35-4324-a9b0-261dc3c7f21e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00.15f522fada8836cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5674/filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00.15f522fbf6cc0dcf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00.15f522fcee9e2fcf], Reason = [Created], Message = [Created container filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00]
STEP: Considering event: Type = [Normal], Name = [filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00.15f522fd0d797ac7], Reason = [Started], Message = [Started container filler-pod-73534b9e-6b46-4bd0-96ea-392b99bcbc00]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15f522fdabb634cb], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:35:18.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5674" for this suite.
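
The FailedScheduling event above is the assertion at the heart of this spec: once the filler pods have consumed most of each node's allocatable CPU, one more pod requesting more CPU than any node has left must be rejected with "0/2 nodes are available: 2 Insufficient cpu." A minimal sketch of such a pod follows; the 600m request is illustrative, since the log does not print the actual figure. Namespace teardown continues below.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// additionalPod mirrors the "additional-pod" above: it requests more CPU
// than either node has free once the filler pods are running, so the
// scheduler emits the FailedScheduling event quoted in the log.
// The 600m figure is illustrative; the log does not show the real request.
func additionalPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "additional-pod",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("600m"),
                    },
                },
            }},
        },
    }
}
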
Feb 20 14:35:26.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:35:26.289: INFO: namespace sched-pred-5674 deletion completed in 8.135446991s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.702 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:35:26.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 20 14:35:27.908: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 25.481164ms)
Feb 20 14:35:28.026: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 118.176014ms)
Feb 20 14:35:28.034: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.607938ms)
Feb 20 14:35:28.040: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.423882ms)
Feb 20 14:35:28.047: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.423635ms)
Feb 20 14:35:28.053: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.387492ms)
Feb 20 14:35:28.059: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.52337ms)
Feb 20 14:35:28.064: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.858422ms)
Feb 20 14:35:28.068: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.902357ms)
Feb 20 14:35:28.075: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.001994ms)
Feb 20 14:35:28.081: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.14828ms)
Feb 20 14:35:28.088: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.874972ms)
Feb 20 14:35:28.095: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.721672ms)
Feb 20 14:35:28.099: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.025861ms)
Feb 20 14:35:28.103: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.53104ms)
Feb 20 14:35:28.106: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.266728ms)
Feb 20 14:35:28.110: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.813371ms)
Feb 20 14:35:28.114: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.658038ms)
Feb 20 14:35:28.121: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.089291ms)
Feb 20 14:35:28.128: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.5708ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:35:28.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1968" for this suite.
Feb 20 14:35:34.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:35:34.270: INFO: namespace proxy-1968 deletion completed in 6.137412334s

• [SLOW TEST:7.980 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
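
Each numbered line above is one GET against the nodes API's proxy subresource, which the apiserver relays to the kubelet's /logs handler; the spec repeats the request twenty times and requires HTTP 200 each time. A minimal client-go sketch of a single such request, assuming 1.15-era signatures (Do() takes no context) and this run's kubeconfig path:

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// Fetch the node's log directory listing through the apiserver's
// node-proxy subresource, the same endpoint the spec polls above.
func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    body, err := cs.CoreV1().RESTClient().Get().
        Resource("nodes").
        Name("iruya-node").
        SubResource("proxy").
        Suffix("logs/").
        Do().Raw()
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", body) // e.g. a listing containing "alternatives.log"
}
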
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:35:34.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 20 14:35:34.400: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:35:46.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9250" for this suite.
Feb 20 14:35:52.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:35:52.791: INFO: namespace init-container-9250 deletion completed in 6.171453552s

• [SLOW TEST:18.521 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
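
With restartPolicy Never, the kubelet runs each init container to completion, in order, before the app container starts, and a permanently failed init container fails the whole pod; the spec above watches the pod until it reports the Initialized condition. A minimal sketch of such a pod; images and commands are illustrative, not the framework's exact ones:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A RestartNever pod with two init containers. The kubelet runs init-1,
// then init-2, each to completion, before starting the run-1 container.
func initContainerPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-sketch"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                {Name: "init-1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init-2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run-1", Image: "busybox", Command: []string{"/bin/true"}},
            },
        },
    }
}
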
SSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:35:52.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-3344169a-dfef-480e-81ba-eaca90d0608c
STEP: Creating secret with name secret-projected-all-test-volume-bbb34cd2-4dab-4c06-9d50-627d1d10a0bf
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 20 14:35:52.981: INFO: Waiting up to 5m0s for pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33" in namespace "projected-365" to be "success or failure"
Feb 20 14:35:52.987: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33": Phase="Pending", Reason="", readiness=false. Elapsed: 5.319459ms
Feb 20 14:35:55.009: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027456879s
Feb 20 14:35:57.016: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034792177s
Feb 20 14:35:59.022: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040893243s
Feb 20 14:36:01.034: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052940709s
Feb 20 14:36:03.040: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058957175s
STEP: Saw pod success
Feb 20 14:36:03.040: INFO: Pod "projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33" satisfied condition "success or failure"
Feb 20 14:36:03.043: INFO: Trying to get logs from node iruya-node pod projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33 container projected-all-volume-test: 
STEP: delete the pod
Feb 20 14:36:03.132: INFO: Waiting for pod projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33 to disappear
Feb 20 14:36:03.148: INFO: Pod projected-volume-f234e6eb-088d-4df7-9a94-e123dbef5b33 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:36:03.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-365" for this suite.
Feb 20 14:36:09.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:36:09.312: INFO: namespace projected-365 deletion completed in 6.160088259s

• [SLOW TEST:16.521 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
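
A projected volume merges several sources under a single mount point; this spec projects a ConfigMap, a Secret, and a downward API field together and has the container verify all three files. A sketch of such a volume, with illustrative object names:

package sketch

import corev1 "k8s.io/api/core/v1"

// One projected volume combining the three source types this spec
// exercises together: a ConfigMap, a Secret, and a downward API field.
func allInOneVolume() corev1.Volume {
    return corev1.Volume{
        Name: "projected-all-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
                    }},
                    {Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
                    }},
                    {DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    }},
                },
            },
        },
    }
}
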
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:36:09.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 20 14:36:09.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc" in namespace "projected-4441" to be "success or failure"
Feb 20 14:36:09.486: INFO: Pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.302953ms
Feb 20 14:36:11.495: INFO: Pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020146576s
Feb 20 14:36:13.504: INFO: Pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02884022s
Feb 20 14:36:15.517: INFO: Pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042186947s
Feb 20 14:36:17.528: INFO: Pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052955471s
STEP: Saw pod success
Feb 20 14:36:17.528: INFO: Pod "downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc" satisfied condition "success or failure"
Feb 20 14:36:17.531: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc container client-container: 
STEP: delete the pod
Feb 20 14:36:17.753: INFO: Waiting for pod downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc to disappear
Feb 20 14:36:17.775: INFO: Pod downwardapi-volume-c06b1022-35d7-483c-830c-6a08de7aa0bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:36:17.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4441" for this suite.
Feb 20 14:36:23.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:36:23.923: INFO: namespace projected-4441 deletion completed in 6.130993415s

• [SLOW TEST:14.611 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
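
This variant sets a per-file Mode on a single downward API item, overriding the volume-wide DefaultMode for that file only; the container prints the file's mode bits and the spec matches them. A sketch of the volume source, with 0400 as an illustrative mode:

package sketch

import corev1 "k8s.io/api/core/v1"

// A projected downward API volume where one item carries its own Mode,
// which takes precedence over the volume's DefaultMode for that file.
func downwardAPIVolumeWithItemMode() corev1.Volume {
    mode := int32(0400) // illustrative; overrides DefaultMode for this item
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            Mode:     &mode,
                        }},
                    },
                }},
            },
        },
    }
}
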
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:36:23.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 20 14:36:24.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086799,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 20 14:36:24.102: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086799,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 20 14:36:34.113: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086814,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 20 14:36:34.113: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086814,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 20 14:36:44.123: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086828,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 20 14:36:44.124: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086828,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 20 14:36:54.134: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086842,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 20 14:36:54.135: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-a,UID:d4eed4a3-f775-4ac8-b850-ba5a11dba22c,ResourceVersion:25086842,Generation:0,CreationTimestamp:2020-02-20 14:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 20 14:37:04.150: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-b,UID:70479efc-b50e-4cf2-9510-7ef24e41e75a,ResourceVersion:25086856,Generation:0,CreationTimestamp:2020-02-20 14:37:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 20 14:37:04.151: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-b,UID:70479efc-b50e-4cf2-9510-7ef24e41e75a,ResourceVersion:25086856,Generation:0,CreationTimestamp:2020-02-20 14:37:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 20 14:37:14.160: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-b,UID:70479efc-b50e-4cf2-9510-7ef24e41e75a,ResourceVersion:25086871,Generation:0,CreationTimestamp:2020-02-20 14:37:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 20 14:37:14.160: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7502,SelfLink:/api/v1/namespaces/watch-7502/configmaps/e2e-watch-test-configmap-b,UID:70479efc-b50e-4cf2-9510-7ef24e41e75a,ResourceVersion:25086871,Generation:0,CreationTimestamp:2020-02-20 14:37:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:37:24.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7502" for this suite.
Feb 20 14:37:30.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:37:30.363: INFO: namespace watch-7502 deletion completed in 6.194340139s

• [SLOW TEST:66.439 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
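
The three watches above (label A, label B, and A-or-B) each receive exactly the ADDED/MODIFIED/DELETED events matching their selector, which is why every mutation is logged twice: once by the dedicated watch and once by the combined one. A minimal client-go sketch of one such watch, assuming 1.15-era signatures (no context argument) and this run's namespace and label:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// Open a label-filtered watch on ConfigMaps and print each event as it
// arrives -- the mechanism behind the "Got : ADDED/MODIFIED/DELETED"
// lines in the log above.
func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    w, err := cs.CoreV1().ConfigMaps("watch-7502").Watch(metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
    }
}
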
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:37:30.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:37:38.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2801" for this suite.
Feb 20 14:38:20.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:38:20.826: INFO: namespace kubelet-test-2801 deletion completed in 42.132875525s

• [SLOW TEST:50.461 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
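
The spec creates a busybox pod whose entrypoint echoes a string to stdout, then asserts the same text comes back through the pod-logs endpoint; none of that exchange is printed above. A sketch of the log read, with an illustrative pod name and the 1.15-era DoRaw() (no context argument):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// Read a container's logs through the API, which is how the spec checks
// that the busybox echo landed in the kubelet's log stream.
func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    raw, err := cs.CoreV1().Pods("kubelet-test-2801").
        GetLogs("busybox-scheduling-pod", &corev1.PodLogOptions{}). // pod name illustrative
        DoRaw()
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s", raw)
}
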
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:38:20.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-53a73792-50c5-4fe1-b81e-324bfbc1ff73
STEP: Creating a pod to test consume configMaps
Feb 20 14:38:20.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c" in namespace "configmap-2970" to be "success or failure"
Feb 20 14:38:20.966: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.842183ms
Feb 20 14:38:22.986: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03255247s
Feb 20 14:38:24.998: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044697942s
Feb 20 14:38:27.006: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052673422s
Feb 20 14:38:29.013: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059591434s
Feb 20 14:38:31.023: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069360829s
STEP: Saw pod success
Feb 20 14:38:31.023: INFO: Pod "pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c" satisfied condition "success or failure"
Feb 20 14:38:31.027: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c container configmap-volume-test: 
STEP: delete the pod
Feb 20 14:38:31.086: INFO: Waiting for pod pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c to disappear
Feb 20 14:38:31.095: INFO: Pod pod-configmaps-bf509aff-9470-49f3-9e18-cd8951cb806c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:38:31.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2970" for this suite.
Feb 20 14:38:37.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:38:37.230: INFO: namespace configmap-2970 deletion completed in 6.130171247s

• [SLOW TEST:16.404 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
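
Here a single ConfigMap backs two volumes mounted at different paths in the same pod, and the test container verifies both copies. A sketch of the doubled volume and mount wiring, with illustrative names:

package sketch

import corev1 "k8s.io/api/core/v1"

// The same ConfigMap mounted twice in one pod -- the shape that
// "consumable in multiple volumes in the same pod" exercises.
func twoConfigMapVolumes() ([]corev1.Volume, []corev1.VolumeMount) {
    src := corev1.VolumeSource{
        ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
        },
    }
    volumes := []corev1.Volume{
        {Name: "configmap-volume-1", VolumeSource: src},
        {Name: "configmap-volume-2", VolumeSource: src},
    }
    mounts := []corev1.VolumeMount{
        {Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
        {Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
    }
    return volumes, mounts
}
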
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:38:37.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 20 14:38:37.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8137'
Feb 20 14:38:40.202: INFO: stderr: ""
Feb 20 14:38:40.202: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 20 14:38:41.214: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:41.214: INFO: Found 0 / 1
Feb 20 14:38:42.220: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:42.220: INFO: Found 0 / 1
Feb 20 14:38:43.211: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:43.211: INFO: Found 0 / 1
Feb 20 14:38:44.210: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:44.210: INFO: Found 0 / 1
Feb 20 14:38:45.214: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:45.214: INFO: Found 0 / 1
Feb 20 14:38:46.211: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:46.211: INFO: Found 0 / 1
Feb 20 14:38:47.216: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:47.216: INFO: Found 0 / 1
Feb 20 14:38:48.214: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:48.214: INFO: Found 1 / 1
Feb 20 14:38:48.214: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 20 14:38:48.222: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:48.222: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 20 14:38:48.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-m9zvv --namespace=kubectl-8137 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 20 14:38:48.386: INFO: stderr: ""
Feb 20 14:38:48.386: INFO: stdout: "pod/redis-master-m9zvv patched\n"
STEP: checking annotations
Feb 20 14:38:48.401: INFO: Selector matched 1 pods for map[app:redis]
Feb 20 14:38:48.401: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:38:48.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8137" for this suite.
Feb 20 14:39:10.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:39:10.572: INFO: namespace kubectl-8137 deletion completed in 22.162322675s

• [SLOW TEST:33.342 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
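
The kubectl patch invocation above sends a strategic-merge patch that adds one annotation. The API-level equivalent is sketched below, reusing this run's pod and namespace names and the 1.15-era Patch signature (no context argument):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// Apply the same {"metadata":{"annotations":{"x":"y"}}} patch that
// `kubectl patch pod` issued in the log above.
func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    pod, err := cs.CoreV1().Pods("kubectl-8137").
        Patch("redis-master-m9zvv", types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err)
    }
    fmt.Println(pod.Annotations["x"]) // "y"
}
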
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:39:10.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 20 14:39:10.701: INFO: Waiting up to 5m0s for pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279" in namespace "emptydir-6374" to be "success or failure"
Feb 20 14:39:10.707: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279": Phase="Pending", Reason="", readiness=false. Elapsed: 5.837701ms
Feb 20 14:39:12.719: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017834362s
Feb 20 14:39:14.723: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021731762s
Feb 20 14:39:16.730: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029024266s
Feb 20 14:39:18.737: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036378563s
Feb 20 14:39:20.747: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046205002s
STEP: Saw pod success
Feb 20 14:39:20.747: INFO: Pod "pod-b0792c78-bad9-4fbc-bd77-5575262eb279" satisfied condition "success or failure"
Feb 20 14:39:20.752: INFO: Trying to get logs from node iruya-node pod pod-b0792c78-bad9-4fbc-bd77-5575262eb279 container test-container: 
STEP: delete the pod
Feb 20 14:39:20.827: INFO: Waiting for pod pod-b0792c78-bad9-4fbc-bd77-5575262eb279 to disappear
Feb 20 14:39:20.886: INFO: Pod pod-b0792c78-bad9-4fbc-bd77-5575262eb279 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:39:20.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6374" for this suite.
Feb 20 14:39:26.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:39:27.093: INFO: namespace emptydir-6374 deletion completed in 6.194892562s

• [SLOW TEST:16.520 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
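
The volume under test is an emptyDir on the default medium, i.e. backed by node disk rather than tmpfs (StorageMediumMemory); the test container prints the mount point's mode bits and the spec matches them. A sketch of that volume source:

package sketch

import corev1 "k8s.io/api/core/v1"

// An emptyDir on the default medium. Leaving Medium as
// StorageMediumDefault (the empty string) selects node-disk backing.
func defaultMediumEmptyDir() corev1.Volume {
    return corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{
                Medium: corev1.StorageMediumDefault,
            },
        },
    }
}
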
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:39:27.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-09affc5c-df35-4f56-a38f-c9fcef2bb3bf
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:39:27.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9788" for this suite.
Feb 20 14:39:33.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:39:33.384: INFO: namespace configmap-9788 deletion completed in 6.225983525s

• [SLOW TEST:6.291 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
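
The only assertion here is that apiserver validation refuses a ConfigMap containing an empty data key. A sketch that provokes the same Invalid error, assuming the 1.15-era Create() without a context and an illustrative ConfigMap name:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// Try to create a ConfigMap whose only data key is "" -- validation
// rejects it, which is the entire point of this spec.
func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
        Data:       map[string]string{"": "value"},
    }
    _, err = cs.CoreV1().ConfigMaps("default").Create(cm)
    fmt.Println(err) // expect an Invalid error naming the empty key
}
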
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:39:33.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9083
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 20 14:39:33.448: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 20 14:40:03.656: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9083 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 14:40:03.656: INFO: >>> kubeConfig: /root/.kube/config
I0220 14:40:03.743848       8 log.go:172] (0xc001be0b00) (0xc001f34b40) Create stream
I0220 14:40:03.743913       8 log.go:172] (0xc001be0b00) (0xc001f34b40) Stream added, broadcasting: 1
I0220 14:40:03.754261       8 log.go:172] (0xc001be0b00) Reply frame received for 1
I0220 14:40:03.754301       8 log.go:172] (0xc001be0b00) (0xc001f34be0) Create stream
I0220 14:40:03.754310       8 log.go:172] (0xc001be0b00) (0xc001f34be0) Stream added, broadcasting: 3
I0220 14:40:03.757159       8 log.go:172] (0xc001be0b00) Reply frame received for 3
I0220 14:40:03.757205       8 log.go:172] (0xc001be0b00) (0xc001230140) Create stream
I0220 14:40:03.757218       8 log.go:172] (0xc001be0b00) (0xc001230140) Stream added, broadcasting: 5
I0220 14:40:03.760013       8 log.go:172] (0xc001be0b00) Reply frame received for 5
I0220 14:40:04.056080       8 log.go:172] (0xc001be0b00) Data frame received for 3
I0220 14:40:04.056140       8 log.go:172] (0xc001f34be0) (3) Data frame handling
I0220 14:40:04.056153       8 log.go:172] (0xc001f34be0) (3) Data frame sent
I0220 14:40:04.282186       8 log.go:172] (0xc001be0b00) (0xc001f34be0) Stream removed, broadcasting: 3
I0220 14:40:04.282318       8 log.go:172] (0xc001be0b00) (0xc001230140) Stream removed, broadcasting: 5
I0220 14:40:04.282451       8 log.go:172] (0xc001be0b00) Data frame received for 1
I0220 14:40:04.282537       8 log.go:172] (0xc001f34b40) (1) Data frame handling
I0220 14:40:04.282583       8 log.go:172] (0xc001f34b40) (1) Data frame sent
I0220 14:40:04.282599       8 log.go:172] (0xc001be0b00) (0xc001f34b40) Stream removed, broadcasting: 1
I0220 14:40:04.282610       8 log.go:172] (0xc001be0b00) Go away received
I0220 14:40:04.282874       8 log.go:172] (0xc001be0b00) (0xc001f34b40) Stream removed, broadcasting: 1
I0220 14:40:04.282897       8 log.go:172] (0xc001be0b00) (0xc001f34be0) Stream removed, broadcasting: 3
I0220 14:40:04.282903       8 log.go:172] (0xc001be0b00) (0xc001230140) Stream removed, broadcasting: 5
Feb 20 14:40:04.282: INFO: Found all expected endpoints: [netserver-0]
Feb 20 14:40:04.291: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9083 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 14:40:04.291: INFO: >>> kubeConfig: /root/.kube/config
I0220 14:40:04.370778       8 log.go:172] (0xc0012f5340) (0xc001230780) Create stream
I0220 14:40:04.370865       8 log.go:172] (0xc0012f5340) (0xc001230780) Stream added, broadcasting: 1
I0220 14:40:04.382169       8 log.go:172] (0xc0012f5340) Reply frame received for 1
I0220 14:40:04.382220       8 log.go:172] (0xc0012f5340) (0xc00227e6e0) Create stream
I0220 14:40:04.382241       8 log.go:172] (0xc0012f5340) (0xc00227e6e0) Stream added, broadcasting: 3
I0220 14:40:04.383719       8 log.go:172] (0xc0012f5340) Reply frame received for 3
I0220 14:40:04.383734       8 log.go:172] (0xc0012f5340) (0xc001230aa0) Create stream
I0220 14:40:04.383738       8 log.go:172] (0xc0012f5340) (0xc001230aa0) Stream added, broadcasting: 5
I0220 14:40:04.386468       8 log.go:172] (0xc0012f5340) Reply frame received for 5
I0220 14:40:04.539281       8 log.go:172] (0xc0012f5340) Data frame received for 3
I0220 14:40:04.539388       8 log.go:172] (0xc00227e6e0) (3) Data frame handling
I0220 14:40:04.539418       8 log.go:172] (0xc00227e6e0) (3) Data frame sent
I0220 14:40:04.765378       8 log.go:172] (0xc0012f5340) Data frame received for 1
I0220 14:40:04.765607       8 log.go:172] (0xc001230780) (1) Data frame handling
I0220 14:40:04.765647       8 log.go:172] (0xc001230780) (1) Data frame sent
I0220 14:40:04.766412       8 log.go:172] (0xc0012f5340) (0xc001230780) Stream removed, broadcasting: 1
I0220 14:40:04.767512       8 log.go:172] (0xc0012f5340) (0xc00227e6e0) Stream removed, broadcasting: 3
I0220 14:40:04.768753       8 log.go:172] (0xc0012f5340) (0xc001230aa0) Stream removed, broadcasting: 5
I0220 14:40:04.768853       8 log.go:172] (0xc0012f5340) (0xc001230780) Stream removed, broadcasting: 1
I0220 14:40:04.768903       8 log.go:172] (0xc0012f5340) (0xc00227e6e0) Stream removed, broadcasting: 3
I0220 14:40:04.768935       8 log.go:172] (0xc0012f5340) (0xc001230aa0) Stream removed, broadcasting: 5
I0220 14:40:04.769039       8 log.go:172] (0xc0012f5340) Go away received
Feb 20 14:40:04.769: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:40:04.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9083" for this suite.
Feb 20 14:40:28.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:40:28.922: INFO: namespace pod-network-test-9083 deletion completed in 24.139192994s

• [SLOW TEST:55.537 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
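
Each "Found all expected endpoints" line above means a curl from the host-network test pod to a netserver pod's /hostName endpoint returned that pod's name. A plain-Go version of the same probe; the IP and port are the ones from this run and are only illustrative:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

// GET http://<netserver-pod-ip>:8080/hostName and print the responding
// pod's name, mirroring the curl the framework execs in the log above.
func main() {
    client := &http.Client{Timeout: 15 * time.Second}
    resp, err := client.Get("http://10.32.0.4:8080/hostName") // IP from this run
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", body) // e.g. "netserver-0"
}
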
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:40:28.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4279
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 20 14:40:29.013: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 20 14:41:09.219: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4279 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 14:41:09.219: INFO: >>> kubeConfig: /root/.kube/config
I0220 14:41:09.293845       8 log.go:172] (0xc00157e210) (0xc002a363c0) Create stream
I0220 14:41:09.293893       8 log.go:172] (0xc00157e210) (0xc002a363c0) Stream added, broadcasting: 1
I0220 14:41:09.299427       8 log.go:172] (0xc00157e210) Reply frame received for 1
I0220 14:41:09.299491       8 log.go:172] (0xc00157e210) (0xc002a36460) Create stream
I0220 14:41:09.299502       8 log.go:172] (0xc00157e210) (0xc002a36460) Stream added, broadcasting: 3
I0220 14:41:09.302458       8 log.go:172] (0xc00157e210) Reply frame received for 3
I0220 14:41:09.302519       8 log.go:172] (0xc00157e210) (0xc00227e1e0) Create stream
I0220 14:41:09.302533       8 log.go:172] (0xc00157e210) (0xc00227e1e0) Stream added, broadcasting: 5
I0220 14:41:09.304516       8 log.go:172] (0xc00157e210) Reply frame received for 5
I0220 14:41:09.488701       8 log.go:172] (0xc00157e210) Data frame received for 3
I0220 14:41:09.488741       8 log.go:172] (0xc002a36460) (3) Data frame handling
I0220 14:41:09.488763       8 log.go:172] (0xc002a36460) (3) Data frame sent
I0220 14:41:09.634497       8 log.go:172] (0xc00157e210) (0xc002a36460) Stream removed, broadcasting: 3
I0220 14:41:09.634706       8 log.go:172] (0xc00157e210) Data frame received for 1
I0220 14:41:09.634745       8 log.go:172] (0xc00157e210) (0xc00227e1e0) Stream removed, broadcasting: 5
I0220 14:41:09.634802       8 log.go:172] (0xc002a363c0) (1) Data frame handling
I0220 14:41:09.634868       8 log.go:172] (0xc002a363c0) (1) Data frame sent
I0220 14:41:09.634903       8 log.go:172] (0xc00157e210) (0xc002a363c0) Stream removed, broadcasting: 1
I0220 14:41:09.634941       8 log.go:172] (0xc00157e210) Go away received
I0220 14:41:09.635072       8 log.go:172] (0xc00157e210) (0xc002a363c0) Stream removed, broadcasting: 1
I0220 14:41:09.635093       8 log.go:172] (0xc00157e210) (0xc002a36460) Stream removed, broadcasting: 3
I0220 14:41:09.635101       8 log.go:172] (0xc00157e210) (0xc00227e1e0) Stream removed, broadcasting: 5
Feb 20 14:41:09.635: INFO: Waiting for endpoints: map[]
Feb 20 14:41:09.652: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4279 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 14:41:09.652: INFO: >>> kubeConfig: /root/.kube/config
I0220 14:41:09.715408       8 log.go:172] (0xc0003c0580) (0xc00330a3c0) Create stream
I0220 14:41:09.715456       8 log.go:172] (0xc0003c0580) (0xc00330a3c0) Stream added, broadcasting: 1
I0220 14:41:09.723261       8 log.go:172] (0xc0003c0580) Reply frame received for 1
I0220 14:41:09.723297       8 log.go:172] (0xc0003c0580) (0xc002a36500) Create stream
I0220 14:41:09.723316       8 log.go:172] (0xc0003c0580) (0xc002a36500) Stream added, broadcasting: 3
I0220 14:41:09.726757       8 log.go:172] (0xc0003c0580) Reply frame received for 3
I0220 14:41:09.726796       8 log.go:172] (0xc0003c0580) (0xc00227e280) Create stream
I0220 14:41:09.726809       8 log.go:172] (0xc0003c0580) (0xc00227e280) Stream added, broadcasting: 5
I0220 14:41:09.728535       8 log.go:172] (0xc0003c0580) Reply frame received for 5
I0220 14:41:09.885572       8 log.go:172] (0xc0003c0580) Data frame received for 3
I0220 14:41:09.885602       8 log.go:172] (0xc002a36500) (3) Data frame handling
I0220 14:41:09.885617       8 log.go:172] (0xc002a36500) (3) Data frame sent
I0220 14:41:10.020523       8 log.go:172] (0xc0003c0580) (0xc002a36500) Stream removed, broadcasting: 3
I0220 14:41:10.020655       8 log.go:172] (0xc0003c0580) Data frame received for 1
I0220 14:41:10.020696       8 log.go:172] (0xc00330a3c0) (1) Data frame handling
I0220 14:41:10.020710       8 log.go:172] (0xc00330a3c0) (1) Data frame sent
I0220 14:41:10.020725       8 log.go:172] (0xc0003c0580) (0xc00330a3c0) Stream removed, broadcasting: 1
I0220 14:41:10.021641       8 log.go:172] (0xc0003c0580) (0xc00227e280) Stream removed, broadcasting: 5
I0220 14:41:10.021677       8 log.go:172] (0xc0003c0580) (0xc00330a3c0) Stream removed, broadcasting: 1
I0220 14:41:10.021683       8 log.go:172] (0xc0003c0580) (0xc002a36500) Stream removed, broadcasting: 3
I0220 14:41:10.021688       8 log.go:172] (0xc0003c0580) (0xc00227e280) Stream removed, broadcasting: 5
I0220 14:41:10.021788       8 log.go:172] (0xc0003c0580) Go away received
Feb 20 14:41:10.022: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:41:10.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4279" for this suite.
Feb 20 14:41:34.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:41:34.183: INFO: namespace pod-network-test-4279 deletion completed in 24.149844861s

• [SLOW TEST:65.261 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
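
The UDP variant goes through the test container's /dial endpoint, which relays the hostName probe over UDP to the target pod and reports the answers; "Waiting for endpoints: map[]" above means no expected responder was still missing. A sketch of the same request, with the IPs and ports from this run used purely as illustration:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/url"
)

// Ask the test container's /dial endpoint to relay a hostName request
// over UDP to the target pod, as the execed curl does in the log above.
func main() {
    q := url.Values{}
    q.Set("request", "hostName")
    q.Set("protocol", "udp")
    q.Set("host", "10.32.0.4") // target pod IP from this run
    q.Set("port", "8081")
    q.Set("tries", "1")
    resp, err := http.Get("http://10.44.0.2:8080/dial?" + q.Encode())
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", body) // JSON listing the pods that answered
}
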
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:41:34.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 20 14:41:34.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51" in namespace "projected-2360" to be "success or failure"
Feb 20 14:41:34.341: INFO: Pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51": Phase="Pending", Reason="", readiness=false. Elapsed: 26.699002ms
Feb 20 14:41:36.350: INFO: Pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035041188s
Feb 20 14:41:38.357: INFO: Pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042879251s
Feb 20 14:41:40.371: INFO: Pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056463427s
Feb 20 14:41:42.384: INFO: Pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069211349s
STEP: Saw pod success
Feb 20 14:41:42.384: INFO: Pod "downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51" satisfied condition "success or failure"
Feb 20 14:41:42.387: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51 container client-container: 
STEP: delete the pod
Feb 20 14:41:42.465: INFO: Waiting for pod downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51 to disappear
Feb 20 14:41:42.546: INFO: Pod downwardapi-volume-e22433d4-38da-44a9-ab16-f6fc0fd08f51 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:41:42.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2360" for this suite.
Feb 20 14:41:48.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:41:48.718: INFO: namespace projected-2360 deletion completed in 6.164001852s

• [SLOW TEST:14.535 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
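The pod under test mounts a projected downward API volume whose DefaultMode the kubelet applies to every projected file. A sketch of the relevant volume using the k8s.io/api types; the volume and file names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0644) // the mode the test asserts on each projected file
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}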
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:41:48.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 20 14:41:48.817: INFO: Waiting up to 5m0s for pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b" in namespace "emptydir-3567" to be "success or failure"
Feb 20 14:41:48.836: INFO: Pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.355782ms
Feb 20 14:41:51.479: INFO: Pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662289362s
Feb 20 14:41:53.486: INFO: Pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.669216614s
Feb 20 14:41:55.503: INFO: Pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685888444s
Feb 20 14:41:57.511: INFO: Pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.69448517s
STEP: Saw pod success
Feb 20 14:41:57.511: INFO: Pod "pod-85a0c418-2df5-4efd-9b81-918d59402f9b" satisfied condition "success or failure"
Feb 20 14:41:57.516: INFO: Trying to get logs from node iruya-node pod pod-85a0c418-2df5-4efd-9b81-918d59402f9b container test-container: 
STEP: delete the pod
Feb 20 14:41:57.674: INFO: Waiting for pod pod-85a0c418-2df5-4efd-9b81-918d59402f9b to disappear
Feb 20 14:41:57.686: INFO: Pod pod-85a0c418-2df5-4efd-9b81-918d59402f9b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:41:57.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3567" for this suite.
Feb 20 14:42:03.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:42:03.901: INFO: namespace emptydir-3567 deletion completed in 6.208677481s

• [SLOW TEST:15.183 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
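An emptyDir test pod of this shape is just a throwaway container plus a volume on the node's default medium. A compact sketch of such a PodSpec; the image, paths, and shell one-liner are illustrative stand-ins for the conformance image's mount tester:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox:1.29",
			// Write a file into the emptyDir and report its mode, roughly
			// what the test's container does for the (non-root,0644) case.
			Command:      []string{"sh", "-c", "echo hi > /ed/f && chmod 0644 /ed/f && stat -c '%a' /ed/f"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// "node default medium" == StorageMediumDefault (empty string)
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
			},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}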
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:42:03.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 20 14:42:04.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f" in namespace "downward-api-5508" to be "success or failure"
Feb 20 14:42:04.176: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f": Phase="Pending", Reason="", readiness=false. Elapsed: 120.119154ms
Feb 20 14:42:06.184: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128153926s
Feb 20 14:42:08.195: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138712804s
Feb 20 14:42:10.205: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149172051s
Feb 20 14:42:12.220: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164560889s
Feb 20 14:42:14.226: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169954458s
STEP: Saw pod success
Feb 20 14:42:14.226: INFO: Pod "downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f" satisfied condition "success or failure"
Feb 20 14:42:14.229: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f container client-container: 
STEP: delete the pod
Feb 20 14:42:14.300: INFO: Waiting for pod downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f to disappear
Feb 20 14:42:14.311: INFO: Pod downwardapi-volume-72d65b23-7a75-4de0-ad81-d419c97e219f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:42:14.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5508" for this suite.
Feb 20 14:42:20.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:42:20.528: INFO: namespace downward-api-5508 deletion completed in 6.209833148s

• [SLOW TEST:16.626 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:42:20.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 20 14:42:20.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245" in namespace "downward-api-926" to be "success or failure"
Feb 20 14:42:20.675: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245": Phase="Pending", Reason="", readiness=false. Elapsed: 13.9299ms
Feb 20 14:42:23.182: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521025556s
Feb 20 14:42:25.188: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245": Phase="Pending", Reason="", readiness=false. Elapsed: 4.527225399s
Feb 20 14:42:27.196: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535201216s
Feb 20 14:42:29.203: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542628491s
Feb 20 14:42:31.211: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.55028421s
STEP: Saw pod success
Feb 20 14:42:31.211: INFO: Pod "downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245" satisfied condition "success or failure"
Feb 20 14:42:31.215: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245 container client-container: 
STEP: delete the pod
Feb 20 14:42:31.262: INFO: Waiting for pod downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245 to disappear
Feb 20 14:42:31.272: INFO: Pod downwardapi-volume-880d4939-41a2-4b15-9723-f029306f0245 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:42:31.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-926" for this suite.
Feb 20 14:42:37.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:42:37.507: INFO: namespace downward-api-926 deletion completed in 6.228488266s

• [SLOW TEST:16.979 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
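Both downward API volume tests above rely on resourceFieldRef items: requests.cpu surfaces the container's request, and limits.cpu falls back to the node's allocatable CPU when no limit is set. A sketch of such a volume; the names and the "1m" divisor (millicores) are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{
					{
						// With no limit set on the container, the kubelet
						// surfaces node allocatable CPU here instead.
						Path: "cpu_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.cpu",
							Divisor:       resource.MustParse("1m"),
						},
					},
					{
						Path: "cpu_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "requests.cpu",
							Divisor:       resource.MustParse("1m"),
						},
					},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}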
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:42:37.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0220 14:42:52.807281       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 14:42:52.807: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:42:52.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1246" for this suite.
Feb 20 14:43:09.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:43:09.838: INFO: namespace gc-1246 deletion completed in 17.010457476s

• [SLOW TEST:32.330 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
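The interesting steps here are attaching a second ownerReference to half the pods, then deleting one owner with foreground propagation; the GC strips the dead owner's reference but keeps any pod that still has a live owner. A sketch against the v1.15-era client-go signatures (no context arguments; those came in later releases); the namespace and dependent pod name are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative namespace

	// Give a dependent pod a second owner, mirroring the "set half of
	// pods ... to have rc simpletest-rc-to-stay as owner as well" step.
	pod, err := cs.CoreV1().Pods(ns).Get("some-dependent-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	stay, err := cs.CoreV1().ReplicationControllers(ns).Get("simpletest-rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       stay.Name,
		UID:        stay.UID,
	})
	if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
		panic(err)
	}

	// Foreground deletion of the other owner: the GC removes that owner's
	// reference from dependents but leaves pods with a surviving owner.
	fg := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ReplicationControllers(ns).Delete(
		"simpletest-rc-to-be-deleted", &metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
	fmt.Println("owner deleted; dependents with a remaining owner survive")
}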
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:43:09.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 20 14:43:10.020: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 20 14:43:15.029: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:43:16.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1591" for this suite.
Feb 20 14:43:22.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:43:22.341: INFO: namespace replication-controller-1591 deletion completed in 6.272097798s

• [SLOW TEST:12.502 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
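"Releasing" means the RC drops its controller ownerReference once a pod's labels stop matching the selector, rather than deleting the pod. A sketch of the label flip with the v1.15-era client-go Patch; the namespace, pod name, and selector key are illustrative assumptions:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Overwrite the label the RC selects on (key assumed to be "name");
	// the controller then releases the pod instead of deleting it.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	if _, err := cs.CoreV1().Pods("default").Patch(
		"pod-release-abcde", types.StrategicMergePatchType, patch); err != nil {
		panic(err)
	}
	fmt.Println("pod relabeled; no longer matched by the RC selector")
}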
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:43:22.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 20 14:43:22.524: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.348688ms)
Feb 20 14:43:22.531: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.570555ms)
Feb 20 14:43:22.535: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.726656ms)
Feb 20 14:43:22.540: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.328585ms)
Feb 20 14:43:22.544: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.37527ms)
Feb 20 14:43:22.548: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.032131ms)
Feb 20 14:43:22.586: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 37.863282ms)
Feb 20 14:43:22.604: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.416002ms)
Feb 20 14:43:22.663: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 59.039787ms)
Feb 20 14:43:22.677: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.193869ms)
Feb 20 14:43:22.693: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.789462ms)
Feb 20 14:43:22.702: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.433342ms)
Feb 20 14:43:22.714: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.736662ms)
Feb 20 14:43:22.722: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.114023ms)
Feb 20 14:43:22.730: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.093759ms)
Feb 20 14:43:22.780: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 49.183306ms)
Feb 20 14:43:22.799: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.700767ms)
Feb 20 14:43:22.813: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.678709ms)
Feb 20 14:43:22.823: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.037936ms)
Feb 20 14:43:22.882: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 58.2213ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:43:22.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5831" for this suite.
Feb 20 14:43:28.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:43:29.009: INFO: namespace proxy-5831 deletion completed in 6.118322601s

• [SLOW TEST:6.668 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
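Each of the twenty requests above is a GET through the API server's node proxy subresource, with the kubelet port spliced into the node name. The equivalent request via the typed client's RESTClient, sketched against the v1.15-era DoRaw() (newer releases take a context):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/iruya-node:10250/proxy/logs/ — "proxy" is the
	// subresource, and the explicit kubelet port rides on the node name.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.200s\n", body) // print a preview, as the log does
}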
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:43:29.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 20 14:43:29.092: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 20 14:43:33.505: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:43:33.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5189" for this suite.
Feb 20 14:43:45.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:43:45.846: INFO: namespace replication-controller-5189 deletion completed in 12.274608502s

• [SLOW TEST:16.837 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
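The quota driving this test caps the namespace at two pods; an RC asking for more gets a ReplicaFailure condition in .status.conditions until it is scaled down, which is what the two "Checking rc" steps assert. A sketch of that quota object:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		TypeMeta:   metav1.TypeMeta{Kind: "ResourceQuota", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			// "allows only two pods to run in the current namespace"
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
	b, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(b))
	// An RC exceeding this quota carries a ReplicaFailure condition in
	// .status.conditions until its replica count fits under the cap.
}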
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:43:45.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9499.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9499.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 14:43:58.015: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.023: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.028: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.032: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.037: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.044: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.050: INFO: Unable to read jessie_udp@PodARecord from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.057: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad: the server could not find the requested resource (get pods dns-test-994ca528-690f-441d-babc-60295e6a51ad)
Feb 20 14:43:58.057: INFO: Lookups using dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 20 14:44:03.108: INFO: DNS probes using dns-9499/dns-test-994ca528-690f-441d-babc-60295e6a51ad succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:44:03.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9499" for this suite.
Feb 20 14:44:09.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:44:09.524: INFO: namespace dns-9499 deletion completed in 6.224762179s

• [SLOW TEST:23.678 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
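Stripped of the wheezy/jessie wrappers, each probe above is a lookup of the API server's service name over the pod's resolv.conf search path. The same check from inside a cluster pod in plain Go:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolves the kubernetes service A record the same way the dig
	// probes do; only meaningful when run inside a cluster pod.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println("A records:", addrs)
}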
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:44:09.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0220 14:44:19.697084       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 14:44:19.697: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:44:19.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4065" for this suite.
Feb 20 14:44:25.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:44:25.929: INFO: namespace gc-4065 deletion completed in 6.227564654s

• [SLOW TEST:16.405 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
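With no orphaning requested, the deletion uses background propagation: the RC disappears immediately and the garbage collector deletes its pods afterwards, which is what "wait for all pods to be garbage collected" polls for. A sketch with the v1.15-era Delete signature; the namespace and RC name are illustrative:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the RC object is removed at once and the
	// GC deletes the dependent pods asynchronously.
	bg := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(
		"simpletest-rc", &metav1.DeleteOptions{PropagationPolicy: &bg}); err != nil {
		panic(err)
	}
}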
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:44:25.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e6d88ee8-4977-446d-9fbf-ff656a23dfa7
STEP: Creating a pod to test consume configMaps
Feb 20 14:44:26.196: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1" in namespace "projected-5274" to be "success or failure"
Feb 20 14:44:26.206: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632925ms
Feb 20 14:44:28.212: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016634905s
Feb 20 14:44:30.219: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023778534s
Feb 20 14:44:32.227: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030892658s
Feb 20 14:44:34.239: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043728759s
Feb 20 14:44:36.247: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051276247s
STEP: Saw pod success
Feb 20 14:44:36.247: INFO: Pod "pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1" satisfied condition "success or failure"
Feb 20 14:44:36.251: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 14:44:36.302: INFO: Waiting for pod pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1 to disappear
Feb 20 14:44:36.342: INFO: Pod pod-projected-configmaps-4266df3a-2840-49fa-a5a9-21bb4a8aabd1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:44:36.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5274" for this suite.
Feb 20 14:44:42.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:44:42.476: INFO: namespace projected-5274 deletion completed in 6.125279322s

• [SLOW TEST:16.547 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
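Same mechanism as the projected downward API test earlier, but with a ConfigMap projection: DefaultMode governs the mode of every key projected into the volume. A sketch of the volume; the 0400 mode and the names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // an illustrative non-default mode to assert on
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume",
						},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}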
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:44:42.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 20 14:44:42.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3579'
Feb 20 14:44:42.675: INFO: stderr: ""
Feb 20 14:44:42.675: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 20 14:44:52.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3579 -o json'
Feb 20 14:44:52.890: INFO: stderr: ""
Feb 20 14:44:52.890: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-20T14:44:42Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-3579\",\n        \"resourceVersion\": \"25088161\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3579/pods/e2e-test-nginx-pod\",\n        \"uid\": \"16fa0e74-5715-41df-80fe-a9b79b86b457\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-5xx7r\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-5xx7r\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-5xx7r\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-20T14:44:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-20T14:44:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-20T14:44:50Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-20T14:44:42Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://956baf5420e6ad86a46d701f7869b68eb9aff0ffb2a29ec5a5bea41dbe0cecd6\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-20T14:44:49Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-20T14:44:42Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 20 14:44:52.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3579'
Feb 20 14:44:53.452: INFO: stderr: ""
Feb 20 14:44:53.452: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 20 14:44:53.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3579'
Feb 20 14:44:59.792: INFO: stderr: ""
Feb 20 14:44:59.792: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:44:59.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3579" for this suite.
Feb 20 14:45:05.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:45:05.956: INFO: namespace kubectl-3579 deletion completed in 6.144349775s

• [SLOW TEST:23.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
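`kubectl replace -f -` round-trips the fetched manifest with a new image; since a container image is one of the few mutable pod fields, the API-level equivalent is a get-modify-update. A sketch with the v1.15-era client-go signatures, reusing the namespace and pod name from this run:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("kubectl-3579")
	pod, err := pods.Get("e2e-test-nginx-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Swap the image in place; the kubelet restarts the container with
	// the new image, which is what the verification step checks.
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("image replaced")
}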
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:45:05.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:46:06.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9707" for this suite.
Feb 20 14:46:28.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:46:28.284: INFO: namespace container-probe-9707 deletion completed in 22.201126434s

• [SLOW TEST:82.328 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
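A readiness probe that always fails keeps the pod out of Ready (and out of Service endpoints) but, unlike a liveness probe, never restarts the container; the test watches for a minute to confirm restartCount stays at 0. A sketch of such a container, using the /bin/false exec probe as an illustrative stand-in (the embedded field is named Handler in the v1.15-era API, ProbeHandler in newer releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "busybox:1.29",
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				// Always fails, so the container never becomes Ready.
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			FailureThreshold:    3,
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
	// Readiness failures only gate Ready/endpoints; they never trigger a
	// container restart, which is exactly what this test asserts.
}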
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:46:28.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 20 14:46:44.668: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 14:46:44.683: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 14:46:46.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 14:46:46.692: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 14:46:48.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 14:46:48.692: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 14:46:50.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 14:46:50.698: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 14:46:52.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 14:46:52.692: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:46:52.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5279" for this suite.
Feb 20 14:47:14.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:47:14.945: INFO: namespace container-lifecycle-hook-5279 deletion completed in 22.193533319s

• [SLOW TEST:46.661 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
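The preStop HTTP hook fires while the pod terminates, which is why the deletion takes several poll cycles: the kubelet calls the hook URL (served by the helper pod created in BeforeEach) before killing the container. A sketch of the hook wiring; the path, host IP, and port are illustrative, and the v1.15-era type is Handler (LifecycleHandler in newer releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "busybox:1.29",
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=prestop", // illustrative hook path
					Host: "10.44.0.1",         // illustrative handler-pod IP
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
	// The "check prestop hook" step then asks the handler pod whether it
	// received the GET before the hooked pod finished terminating.
}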
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:47:14.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:47:27.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9684" for this suite.
Feb 20 14:47:33.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:47:33.289: INFO: namespace kubelet-test-9684 deletion completed in 6.108546352s

• [SLOW TEST:18.343 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
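For a command that always fails, the test reads the container's terminated state out of pod status and asserts on the Reason. A sketch of that status walk with the v1.15-era client-go; the namespace and pod name are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get("bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		// The test asserts a terminated Reason (e.g. "Error"), a
		// StartedAt/FinishedAt pair, and a non-empty image.
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s terminated: reason=%s exit=%d\n", st.Name, t.Reason, t.ExitCode)
		}
	}
}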
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:47:33.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2475/configmap-test-2a955443-b4c4-4b1e-a688-75d081faac71
STEP: Creating a pod to test consume configMaps
Feb 20 14:47:33.410: INFO: Waiting up to 5m0s for pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267" in namespace "configmap-2475" to be "success or failure"
Feb 20 14:47:33.425: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267": Phase="Pending", Reason="", readiness=false. Elapsed: 14.908106ms
Feb 20 14:47:35.431: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021185226s
Feb 20 14:47:37.436: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025966345s
Feb 20 14:47:39.443: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032926522s
Feb 20 14:47:41.450: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03986351s
Feb 20 14:47:43.459: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048784155s
STEP: Saw pod success
Feb 20 14:47:43.459: INFO: Pod "pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267" satisfied condition "success or failure"
Feb 20 14:47:43.464: INFO: Trying to get logs from node iruya-node pod pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267 container env-test: 
STEP: delete the pod
Feb 20 14:47:43.520: INFO: Waiting for pod pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267 to disappear
Feb 20 14:47:43.527: INFO: Pod pod-configmaps-52b8ac4a-c854-462d-90b4-f998adb83267 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:47:43.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2475" for this suite.
Feb 20 14:47:49.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:47:49.765: INFO: namespace configmap-2475 deletion completed in 6.232715818s

• [SLOW TEST:16.476 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
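Consuming a ConfigMap via the environment means an EnvVar whose ValueFrom points at a key; the env-test container just prints its environment and the test greps the logs for the expected value. A sketch of the wiring, with the ConfigMap name taken from this run and the key/variable names as illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-2a955443-b4c4-4b1e-a688-75d081faac71",
				},
				Key: "data-1", // illustrative key
			},
		},
	}}
	b, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(b))
}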
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:47:49.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 20 14:47:49.952: INFO: Waiting up to 5m0s for pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27" in namespace "containers-3774" to be "success or failure"
Feb 20 14:47:49.976: INFO: Pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27": Phase="Pending", Reason="", readiness=false. Elapsed: 23.633854ms
Feb 20 14:47:51.983: INFO: Pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031138632s
Feb 20 14:47:54.041: INFO: Pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088737142s
Feb 20 14:47:56.049: INFO: Pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097205612s
Feb 20 14:47:58.093: INFO: Pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.140829455s
STEP: Saw pod success
Feb 20 14:47:58.093: INFO: Pod "client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27" satisfied condition "success or failure"
Feb 20 14:47:58.099: INFO: Trying to get logs from node iruya-node pod client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27 container test-container: 
STEP: delete the pod
Feb 20 14:47:58.176: INFO: Waiting for pod client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27 to disappear
Feb 20 14:47:58.189: INFO: Pod client-containers-f9f3667d-bbd9-49d9-87cf-7fadd02eed27 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:47:58.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3774" for this suite.
Feb 20 14:48:04.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:48:04.376: INFO: namespace containers-3774 deletion completed in 6.150144034s

• [SLOW TEST:14.609 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
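[Context] The spec above checks that a pod's args field replaces the image's default CMD (the "docker cmd") while leaving the ENTRYPOINT intact. A minimal sketch, with an assumed busybox image and echo entrypoint:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-args   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]                # corresponds to the Docker ENTRYPOINT
    args: ["override", "arguments"]       # replaces the image's default CMD

The harness then reads the container log and asserts that it matches the overridden arguments.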
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:48:04.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 20 14:48:04.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4744,SelfLink:/api/v1/namespaces/watch-4744/configmaps/e2e-watch-test-label-changed,UID:9ba74541-e2af-4065-92be-426e1b1b5194,ResourceVersion:25088589,Generation:0,CreationTimestamp:2020-02-20 14:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 20 14:48:04.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4744,SelfLink:/api/v1/namespaces/watch-4744/configmaps/e2e-watch-test-label-changed,UID:9ba74541-e2af-4065-92be-426e1b1b5194,ResourceVersion:25088590,Generation:0,CreationTimestamp:2020-02-20 14:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 20 14:48:04.467: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4744,SelfLink:/api/v1/namespaces/watch-4744/configmaps/e2e-watch-test-label-changed,UID:9ba74541-e2af-4065-92be-426e1b1b5194,ResourceVersion:25088591,Generation:0,CreationTimestamp:2020-02-20 14:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 20 14:48:14.517: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4744,SelfLink:/api/v1/namespaces/watch-4744/configmaps/e2e-watch-test-label-changed,UID:9ba74541-e2af-4065-92be-426e1b1b5194,ResourceVersion:25088607,Generation:0,CreationTimestamp:2020-02-20 14:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 20 14:48:14.518: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4744,SelfLink:/api/v1/namespaces/watch-4744/configmaps/e2e-watch-test-label-changed,UID:9ba74541-e2af-4065-92be-426e1b1b5194,ResourceVersion:25088608,Generation:0,CreationTimestamp:2020-02-20 14:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 20 14:48:14.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4744,SelfLink:/api/v1/namespaces/watch-4744/configmaps/e2e-watch-test-label-changed,UID:9ba74541-e2af-4065-92be-426e1b1b5194,ResourceVersion:25088609,Generation:0,CreationTimestamp:2020-02-20 14:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:48:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4744" for this suite.
Feb 20 14:48:20.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:48:20.649: INFO: namespace watch-4744 deletion completed in 6.123802355s

• [SLOW TEST:16.272 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
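[Context] The watch behaviour exercised above can be reproduced by hand: a watch filtered by label reports DELETED when an object's label is changed so it no longer matches the selector, and ADDED when the label is restored, exactly as the "Got : ADDED/MODIFIED/DELETED" lines show. A sketch (namespace and label value are copied from the log; the command itself is an assumed manual equivalent, not what the test runs):

kubectl get configmaps \
  --namespace=watch-4744 \
  -l watch-this-configmap=label-changed-and-restored \
  --watch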
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:48:20.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 20 14:48:20.866: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 20 14:48:20.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2270'
Feb 20 14:48:21.408: INFO: stderr: ""
Feb 20 14:48:21.408: INFO: stdout: "service/redis-slave created\n"
Feb 20 14:48:21.408: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 20 14:48:21.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2270'
Feb 20 14:48:22.044: INFO: stderr: ""
Feb 20 14:48:22.044: INFO: stdout: "service/redis-master created\n"
Feb 20 14:48:22.045: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 20 14:48:22.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2270'
Feb 20 14:48:22.979: INFO: stderr: ""
Feb 20 14:48:22.979: INFO: stdout: "service/frontend created\n"
Feb 20 14:48:22.979: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 20 14:48:22.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2270'
Feb 20 14:48:23.376: INFO: stderr: ""
Feb 20 14:48:23.376: INFO: stdout: "deployment.apps/frontend created\n"
Feb 20 14:48:23.377: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 20 14:48:23.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2270'
Feb 20 14:48:24.891: INFO: stderr: ""
Feb 20 14:48:24.891: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 20 14:48:24.892: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 20 14:48:24.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2270'
Feb 20 14:48:25.585: INFO: stderr: ""
Feb 20 14:48:25.585: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 20 14:48:25.585: INFO: Waiting for all frontend pods to be Running.
Feb 20 14:48:50.637: INFO: Waiting for frontend to serve content.
Feb 20 14:48:51.209: INFO: Trying to add a new entry to the guestbook.
Feb 20 14:48:51.257: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 20 14:48:51.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2270'
Feb 20 14:48:53.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 14:48:53.339: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 20 14:48:53.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2270'
Feb 20 14:48:53.576: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 14:48:53.577: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 20 14:48:53.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2270'
Feb 20 14:48:53.758: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 14:48:53.759: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 20 14:48:53.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2270'
Feb 20 14:48:53.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 14:48:53.924: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 20 14:48:53.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2270'
Feb 20 14:48:54.092: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 14:48:54.092: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 20 14:48:54.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2270'
Feb 20 14:48:54.553: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 14:48:54.553: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:48:54.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2270" for this suite.
Feb 20 14:49:38.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:49:38.760: INFO: namespace kubectl-2270 deletion completed in 44.17787593s

• [SLOW TEST:78.111 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
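[Context] Outside the harness, the same create/validate/delete cycle can be driven by hand using the manifests echoed into the log above. A sketch, assuming they are concatenated into a local guestbook.yaml (a hypothetical file name):

kubectl create -f guestbook.yaml --namespace=kubectl-2270
kubectl get pods -l app=guestbook,tier=frontend --namespace=kubectl-2270
kubectl delete --grace-period=0 --force -f guestbook.yaml --namespace=kubectl-2270

The forced delete produces the same "Immediate deletion does not wait for confirmation" warning seen in the stderr lines above.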
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:49:38.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 20 14:49:49.057: INFO: Waiting up to 5m0s for pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165" in namespace "pods-8124" to be "success or failure"
Feb 20 14:49:49.081: INFO: Pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165": Phase="Pending", Reason="", readiness=false. Elapsed: 24.643046ms
Feb 20 14:49:51.089: INFO: Pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032099872s
Feb 20 14:49:53.096: INFO: Pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039605048s
Feb 20 14:49:55.109: INFO: Pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052015503s
Feb 20 14:49:57.114: INFO: Pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057321044s
STEP: Saw pod success
Feb 20 14:49:57.114: INFO: Pod "client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165" satisfied condition "success or failure"
Feb 20 14:49:57.116: INFO: Trying to get logs from node iruya-node pod client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165 container env3cont: 
STEP: delete the pod
Feb 20 14:49:57.150: INFO: Waiting for pod client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165 to disappear
Feb 20 14:49:57.158: INFO: Pod client-envvars-2fdef97e-ce04-4ca8-bc22-5d2c16baf165 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:49:57.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8124" for this suite.
Feb 20 14:50:51.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:50:51.371: INFO: namespace pods-8124 deletion completed in 54.20973936s

• [SLOW TEST:72.611 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
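[Context] The spec above relies on the kubelet injecting <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT variables for every service that exists when a pod starts; note the client pod is only created about ten seconds into the test, after a backing service is in place. A minimal sketch (the service name, ports, and selector are assumptions; only the container name env3cont appears in the log):

apiVersion: v1
kind: Service
metadata:
  name: fooservice              # hypothetical; would yield FOOSERVICE_SERVICE_HOST / _PORT
spec:
  ports:
  - port: 8765
    targetPort: 8080
  selector:
    name: server-under-test     # hypothetical label of the backing pod
---
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # the harness inspects this output for the service variables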
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:50:51.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b96f9efd-fc3e-45fe-807c-463e4dae2072
STEP: Creating a pod to test consume configMaps
Feb 20 14:50:51.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132" in namespace "configmap-3756" to be "success or failure"
Feb 20 14:50:51.525: INFO: Pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132": Phase="Pending", Reason="", readiness=false. Elapsed: 9.852787ms
Feb 20 14:50:53.538: INFO: Pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022418474s
Feb 20 14:50:55.550: INFO: Pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034551514s
Feb 20 14:50:57.558: INFO: Pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042313407s
Feb 20 14:50:59.566: INFO: Pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051021873s
STEP: Saw pod success
Feb 20 14:50:59.566: INFO: Pod "pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132" satisfied condition "success or failure"
Feb 20 14:50:59.572: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132 container configmap-volume-test: 
STEP: delete the pod
Feb 20 14:50:59.647: INFO: Waiting for pod pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132 to disappear
Feb 20 14:50:59.653: INFO: Pod pod-configmaps-5c768d15-f881-4fe0-a8a0-4b9df497d132 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:50:59.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3756" for this suite.
Feb 20 14:51:07.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:51:07.877: INFO: namespace configmap-3756 deletion completed in 8.21535842s

• [SLOW TEST:16.506 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
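[Context] The volume variant above mounts the ConfigMap as a directory of files, one file per key. A minimal sketch (names, key, and image are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]   # each key becomes a file
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume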
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:51:07.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 20 14:51:08.149: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 20 14:51:08.195: INFO: Number of nodes with available pods: 0
Feb 20 14:51:08.195: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:09.213: INFO: Number of nodes with available pods: 0
Feb 20 14:51:09.213: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:11.425: INFO: Number of nodes with available pods: 0
Feb 20 14:51:11.425: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:12.237: INFO: Number of nodes with available pods: 0
Feb 20 14:51:12.238: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:14.867: INFO: Number of nodes with available pods: 0
Feb 20 14:51:14.867: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:15.720: INFO: Number of nodes with available pods: 0
Feb 20 14:51:15.720: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:16.209: INFO: Number of nodes with available pods: 0
Feb 20 14:51:16.209: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:17.268: INFO: Number of nodes with available pods: 0
Feb 20 14:51:17.268: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:18.266: INFO: Number of nodes with available pods: 1
Feb 20 14:51:18.266: INFO: Node iruya-node is running more than one daemon pod
Feb 20 14:51:19.209: INFO: Number of nodes with available pods: 2
Feb 20 14:51:19.209: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 20 14:51:19.284: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:19.284: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:20.376: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:20.376: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:21.371: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:21.371: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:22.380: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:22.380: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:23.373: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:23.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:24.375: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:24.375: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:25.374: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:25.374: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:25.374: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:26.379: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:26.379: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:26.379: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:27.370: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:27.370: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:27.370: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:28.371: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:28.371: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:28.371: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:29.372: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:29.372: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:29.372: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:30.370: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:30.370: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:30.370: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:31.373: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:31.373: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:31.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:32.376: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:32.376: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:32.376: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:33.369: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:33.369: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:33.369: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:34.371: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:34.371: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:34.371: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:35.374: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:35.374: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:35.374: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:36.377: INFO: Wrong image for pod: daemon-set-7pm4s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:36.377: INFO: Pod daemon-set-7pm4s is not available
Feb 20 14:51:36.377: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:37.371: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:37.371: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:38.378: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:38.379: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:39.378: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:39.379: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:40.372: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:40.372: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:41.372: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:41.372: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:42.371: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:42.371: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:43.369: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:43.369: INFO: Pod daemon-set-rcmwd is not available
Feb 20 14:51:44.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:45.559: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:46.371: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:47.378: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:48.374: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:48.374: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:49.372: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:49.372: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:50.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:50.373: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:51.372: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:51.372: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:52.376: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:52.376: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:53.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:53.373: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:54.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:54.373: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:55.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:55.373: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:56.375: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:56.375: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:57.373: INFO: Wrong image for pod: daemon-set-lnrjq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 14:51:57.373: INFO: Pod daemon-set-lnrjq is not available
Feb 20 14:51:58.371: INFO: Pod daemon-set-8dnxs is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 20 14:51:58.382: INFO: Number of nodes with available pods: 1
Feb 20 14:51:58.382: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:51:59.397: INFO: Number of nodes with available pods: 1
Feb 20 14:51:59.397: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:52:00.400: INFO: Number of nodes with available pods: 1
Feb 20 14:52:00.400: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:52:01.391: INFO: Number of nodes with available pods: 1
Feb 20 14:52:01.391: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:52:03.055: INFO: Number of nodes with available pods: 1
Feb 20 14:52:03.055: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:52:03.394: INFO: Number of nodes with available pods: 1
Feb 20 14:52:03.394: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:52:04.391: INFO: Number of nodes with available pods: 1
Feb 20 14:52:04.391: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 20 14:52:05.433: INFO: Number of nodes with available pods: 2
Feb 20 14:52:05.433: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4962, will wait for the garbage collector to delete the pods
Feb 20 14:52:05.525: INFO: Deleting DaemonSet.extensions daemon-set took: 11.723537ms
Feb 20 14:52:05.825: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.30359ms
Feb 20 14:52:17.933: INFO: Number of nodes with available pods: 0
Feb 20 14:52:17.933: INFO: Number of running nodes: 0, number of available pods: 0
Feb 20 14:52:17.936: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4962/daemonsets","resourceVersion":"25089280"},"items":null}

Feb 20 14:52:17.939: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4962/pods","resourceVersion":"25089280"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:52:17.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4962" for this suite.
Feb 20 14:52:23.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:52:24.086: INFO: namespace daemonsets-4962 deletion completed in 6.124739554s

• [SLOW TEST:76.208 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
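[Context] The rollout polled above is driven by a pod-template change; both images appear in the log. A sketch of what the DaemonSet looks like before the update (the selector labels and container name are assumptions; the images are taken from the "Expected:/got:" lines):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # hypothetical label
  updateStrategy:
    type: RollingUpdate            # pods are replaced node by node when the template changes
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                  # hypothetical container name
        image: docker.io/library/nginx:1.14-alpine   # the "got:" image in the log

Switching the template image to gcr.io/kubernetes-e2e-test-images/redis:1.0 (the "Expected:" image), for example with kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0, starts the per-node replacement that the "Wrong image for pod" lines are polling for.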
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:52:24.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1672, will wait for the garbage collector to delete the pods
Feb 20 14:52:34.318: INFO: Deleting Job.batch foo took: 13.515304ms
Feb 20 14:52:34.618: INFO: Terminating Job.batch foo pods took: 300.319109ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:53:16.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1672" for this suite.
Feb 20 14:53:22.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:53:22.932: INFO: namespace job-1672 deletion completed in 6.196640294s

• [SLOW TEST:58.845 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
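[Context] The spec above deletes the Job and then waits for the garbage collector to remove its pods, which is why namespace teardown accounts for much of the 58 seconds. A minimal sketch of a Job like the one created (parallelism, image, and command are assumptions; only the name foo comes from the log):

apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                # "Ensuring active pods == parallelism"
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]   # long-running, so pods stay active until deleted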
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:53:22.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7c701126-0bbc-4954-8700-07f8ff2ae052
STEP: Creating a pod to test consume configMaps
Feb 20 14:53:23.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68" in namespace "configmap-6375" to be "success or failure"
Feb 20 14:53:23.088: INFO: Pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.540198ms
Feb 20 14:53:25.097: INFO: Pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016925984s
Feb 20 14:53:27.107: INFO: Pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026198433s
Feb 20 14:53:29.115: INFO: Pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034874738s
Feb 20 14:53:31.128: INFO: Pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047854115s
STEP: Saw pod success
Feb 20 14:53:31.128: INFO: Pod "pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68" satisfied condition "success or failure"
Feb 20 14:53:31.132: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68 container configmap-volume-test: 
STEP: delete the pod
Feb 20 14:53:31.241: INFO: Waiting for pod pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68 to disappear
Feb 20 14:53:31.253: INFO: Pod pod-configmaps-a28e286f-559c-40f9-86c2-ce74b60e7f68 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:53:31.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6375" for this suite.
Feb 20 14:53:37.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:53:37.428: INFO: namespace configmap-6375 deletion completed in 6.170564715s

• [SLOW TEST:14.496 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
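[Context] The "with mappings" variant projects selected keys to chosen paths via items, instead of creating one file per key at the volume root. A minimal sketch (the ConfigMap name, key, and target path are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-map          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1                 # only this key is projected...
        path: path/to/data-2        # ...and it lands at this relative path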
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:53:37.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-9hfr
STEP: Creating a pod to test atomic-volume-subpath
Feb 20 14:53:37.564: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9hfr" in namespace "subpath-6690" to be "success or failure"
Feb 20 14:53:37.571: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739691ms
Feb 20 14:53:39.581: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016835672s
Feb 20 14:53:41.586: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021273361s
Feb 20 14:53:43.593: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028172642s
Feb 20 14:53:45.600: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035909573s
Feb 20 14:53:47.622: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 10.057065231s
Feb 20 14:53:49.646: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 12.081226536s
Feb 20 14:53:51.653: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 14.088523489s
Feb 20 14:53:53.659: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 16.094867761s
Feb 20 14:53:55.670: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 18.105177901s
Feb 20 14:53:57.675: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 20.110659203s
Feb 20 14:53:59.681: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 22.116323253s
Feb 20 14:54:01.694: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 24.129060646s
Feb 20 14:54:03.703: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 26.138809792s
Feb 20 14:54:05.719: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Running", Reason="", readiness=true. Elapsed: 28.154602969s
Feb 20 14:54:07.773: INFO: Pod "pod-subpath-test-secret-9hfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.208094598s
STEP: Saw pod success
Feb 20 14:54:07.773: INFO: Pod "pod-subpath-test-secret-9hfr" satisfied condition "success or failure"
Feb 20 14:54:07.779: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-9hfr container test-container-subpath-secret-9hfr: 
STEP: delete the pod
Feb 20 14:54:07.876: INFO: Waiting for pod pod-subpath-test-secret-9hfr to disappear
Feb 20 14:54:07.936: INFO: Pod pod-subpath-test-secret-9hfr no longer exists
STEP: Deleting pod pod-subpath-test-secret-9hfr
Feb 20 14:54:07.936: INFO: Deleting pod "pod-subpath-test-secret-9hfr" in namespace "subpath-6690"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:54:07.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6690" for this suite.
Feb 20 14:54:13.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:54:14.108: INFO: namespace subpath-6690 deletion completed in 6.155421182s

• [SLOW TEST:36.680 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
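[Context] A subPath mount exposes a single path from inside a volume; for an atomic-writer volume such as a Secret, that means mounting one projected file rather than the whole directory. A minimal sketch (secret name, key, and paths are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret                 # hypothetical name
stringData:
  secret-key: secret-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/mnt/secret-file"]
    volumeMounts:
    - name: secret-volume
      mountPath: /mnt/secret-file   # the single file appears here
      subPath: secret-key           # path of the key inside the secret volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret

One caveat worth knowing: unlike a whole-volume mount, a subPath mount does not pick up updates when the Secret is later rotated.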
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:54:14.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 20 14:54:14.289: INFO: PodSpec: initContainers in spec.initContainers
Feb 20 14:55:17.107: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9e923537-cfb3-44b0-a7a1-2dd0c9bcde77", GenerateName:"", Namespace:"init-container-144", SelfLink:"/api/v1/namespaces/init-container-144/pods/pod-init-9e923537-cfb3-44b0-a7a1-2dd0c9bcde77", UID:"348635dd-c2be-4707-9add-c8504c2781ed", ResourceVersion:"25089682", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717807254, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"289869852"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-k6p9p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025e0000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-k6p9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-k6p9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-k6p9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000c571f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026c0000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000c57280)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000c572a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000c572a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000c572ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717807254, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717807254, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717807254, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717807254, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0002518a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ad6a80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ad6af0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://492b5fe8775231fb213f050f0aa7424bedfc0c6d0e843116c15ea6322b1b27a2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00052a240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00052a220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:55:17.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-144" for this suite.
Feb 20 14:55:29.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:55:29.291: INFO: namespace init-container-144 deletion completed in 12.151531309s

• [SLOW TEST:75.182 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
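For reference, the pod dumped above reduces, stripped of defaults, to roughly the following spec. This is a minimal sketch in Go using the k8s.io/api types visible in the dump; the pod name is illustrative. Because init1 runs /bin/false under RestartPolicy Always, the kubelet keeps restarting it (RestartCount:3 in the status), so init2 and the app container run1 never start, which is exactly what this test asserts:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod reduces the dumped pod to its essentials: a RestartAlways
// pod whose first init container always exits non-zero, so initialization
// never completes and the app container stays in Waiting.
func failingInitPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}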
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:55:29.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-156816f6-f8ff-4914-89f2-a4939af25070
STEP: Creating a pod to test consume configMaps
Feb 20 14:55:29.362: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707" in namespace "projected-2679" to be "success or failure"
Feb 20 14:55:29.426: INFO: Pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707": Phase="Pending", Reason="", readiness=false. Elapsed: 63.66905ms
Feb 20 14:55:31.434: INFO: Pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071168709s
Feb 20 14:55:33.441: INFO: Pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078322491s
Feb 20 14:55:35.450: INFO: Pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087572532s
Feb 20 14:55:37.456: INFO: Pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093519432s
STEP: Saw pod success
Feb 20 14:55:37.456: INFO: Pod "pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707" satisfied condition "success or failure"
Feb 20 14:55:37.459: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 14:55:37.521: INFO: Waiting for pod pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707 to disappear
Feb 20 14:55:37.574: INFO: Pod pod-projected-configmaps-ecea2fc8-a1b4-4cf8-b7b9-49bd17474707 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:55:37.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2679" for this suite.
Feb 20 14:55:43.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:55:43.743: INFO: namespace projected-2679 deletion completed in 6.157904391s

• [SLOW TEST:14.451 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
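The "mappings as non-root" case boils down to a projected volume that remaps a configMap key to a different path inside the volume, consumed by a container running under a non-root UID. A minimal sketch, assuming illustrative key/path names and UID 1000 (the suite's exact values are not shown in this log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts a projected configMap volume and reads back a
// remapped key as a non-root user; the pod exits 0 on success, matching the
// "success or failure" condition polled above.
func projectedConfigMapPod(ns, cmName string) *corev1.Pod {
	uid := int64(1000) // assumed non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
								// the "mapping": project key data-1 to a new path
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}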
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:55:43.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 20 14:55:43.919: INFO: Waiting up to 5m0s for pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe" in namespace "emptydir-4240" to be "success or failure"
Feb 20 14:55:44.032: INFO: Pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe": Phase="Pending", Reason="", readiness=false. Elapsed: 112.687589ms
Feb 20 14:55:46.074: INFO: Pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155140585s
Feb 20 14:55:48.083: INFO: Pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163352491s
Feb 20 14:55:50.095: INFO: Pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175803508s
Feb 20 14:55:52.104: INFO: Pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184605743s
STEP: Saw pod success
Feb 20 14:55:52.104: INFO: Pod "pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe" satisfied condition "success or failure"
Feb 20 14:55:52.112: INFO: Trying to get logs from node iruya-node pod pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe container test-container: 
STEP: delete the pod
Feb 20 14:55:52.282: INFO: Waiting for pod pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe to disappear
Feb 20 14:55:52.311: INFO: Pod pod-140d5f6e-ca52-4dc1-ab14-dbe86c55fbbe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:55:52.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4240" for this suite.
Feb 20 14:55:58.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:55:58.471: INFO: namespace emptydir-4240 deletion completed in 6.150105914s

• [SLOW TEST:14.728 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
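Each EmptyDir (user,mode,medium) case in this run follows the same pattern: mount an emptyDir, create a file with the requested mode, and verify the observed permissions; only the user, mode, and medium vary. A minimal sketch of this tmpfs variant, with an assumed busybox command standing in for the suite's test image (the same shape, with Medium left empty, covers the node-default-medium variants elsewhere in this run, and the later non-root variants differ only in RunAsUser and mode):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirModePod writes a 0644 file into a tmpfs-backed emptyDir as root and
// prints its permissions so the test can verify them from the container log.
func emptyDirModePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs; "" uses the
					// node's default storage medium instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}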
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:55:58.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9322
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9322
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9322
Feb 20 14:55:58.616: INFO: Found 0 stateful pods, waiting for 1
Feb 20 14:56:08.628: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with an unhealthy stateful pod
Feb 20 14:56:08.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 14:56:09.322: INFO: stderr: "I0220 14:56:08.878753    2767 log.go:172] (0xc00013edc0) (0xc000632780) Create stream\nI0220 14:56:08.878875    2767 log.go:172] (0xc00013edc0) (0xc000632780) Stream added, broadcasting: 1\nI0220 14:56:08.892091    2767 log.go:172] (0xc00013edc0) Reply frame received for 1\nI0220 14:56:08.892144    2767 log.go:172] (0xc00013edc0) (0xc0007be000) Create stream\nI0220 14:56:08.892159    2767 log.go:172] (0xc00013edc0) (0xc0007be000) Stream added, broadcasting: 3\nI0220 14:56:08.894083    2767 log.go:172] (0xc00013edc0) Reply frame received for 3\nI0220 14:56:08.894137    2767 log.go:172] (0xc00013edc0) (0xc0007e6000) Create stream\nI0220 14:56:08.894161    2767 log.go:172] (0xc00013edc0) (0xc0007e6000) Stream added, broadcasting: 5\nI0220 14:56:08.896781    2767 log.go:172] (0xc00013edc0) Reply frame received for 5\nI0220 14:56:09.071992    2767 log.go:172] (0xc00013edc0) Data frame received for 5\nI0220 14:56:09.072067    2767 log.go:172] (0xc0007e6000) (5) Data frame handling\nI0220 14:56:09.072112    2767 log.go:172] (0xc0007e6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 14:56:09.160045    2767 log.go:172] (0xc00013edc0) Data frame received for 3\nI0220 14:56:09.160089    2767 log.go:172] (0xc0007be000) (3) Data frame handling\nI0220 14:56:09.160107    2767 log.go:172] (0xc0007be000) (3) Data frame sent\nI0220 14:56:09.315955    2767 log.go:172] (0xc00013edc0) Data frame received for 1\nI0220 14:56:09.316040    2767 log.go:172] (0xc00013edc0) (0xc0007be000) Stream removed, broadcasting: 3\nI0220 14:56:09.316072    2767 log.go:172] (0xc000632780) (1) Data frame handling\nI0220 14:56:09.316113    2767 log.go:172] (0xc000632780) (1) Data frame sent\nI0220 14:56:09.316148    2767 log.go:172] (0xc00013edc0) (0xc0007e6000) Stream removed, broadcasting: 5\nI0220 14:56:09.316199    2767 log.go:172] (0xc00013edc0) (0xc000632780) Stream removed, broadcasting: 1\nI0220 14:56:09.316213    2767 log.go:172] (0xc00013edc0) Go away received\nI0220 14:56:09.316672    2767 log.go:172] (0xc00013edc0) (0xc000632780) Stream removed, broadcasting: 1\nI0220 14:56:09.316682    2767 log.go:172] (0xc00013edc0) (0xc0007be000) Stream removed, broadcasting: 3\nI0220 14:56:09.316686    2767 log.go:172] (0xc00013edc0) (0xc0007e6000) Stream removed, broadcasting: 5\n"
Feb 20 14:56:09.322: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 14:56:09.322: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 14:56:09.329: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 20 14:56:19.339: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 14:56:19.339: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 14:56:19.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999129s
Feb 20 14:56:20.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991640515s
Feb 20 14:56:21.384: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984353028s
Feb 20 14:56:22.396: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977035213s
Feb 20 14:56:23.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.964886361s
Feb 20 14:56:24.413: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.955781673s
Feb 20 14:56:25.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.947353707s
Feb 20 14:56:26.445: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.933028716s
Feb 20 14:56:27.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.915434859s
Feb 20 14:56:28.463: INFO: Verifying statefulset ss doesn't scale past 1 for another 906.294893ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9322
Feb 20 14:56:29.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 14:56:30.055: INFO: stderr: "I0220 14:56:29.697083    2789 log.go:172] (0xc0007a0630) (0xc0007bebe0) Create stream\nI0220 14:56:29.697185    2789 log.go:172] (0xc0007a0630) (0xc0007bebe0) Stream added, broadcasting: 1\nI0220 14:56:29.702409    2789 log.go:172] (0xc0007a0630) Reply frame received for 1\nI0220 14:56:29.702517    2789 log.go:172] (0xc0007a0630) (0xc0008c8000) Create stream\nI0220 14:56:29.702529    2789 log.go:172] (0xc0007a0630) (0xc0008c8000) Stream added, broadcasting: 3\nI0220 14:56:29.704675    2789 log.go:172] (0xc0007a0630) Reply frame received for 3\nI0220 14:56:29.704712    2789 log.go:172] (0xc0007a0630) (0xc0006ec000) Create stream\nI0220 14:56:29.704725    2789 log.go:172] (0xc0007a0630) (0xc0006ec000) Stream added, broadcasting: 5\nI0220 14:56:29.708057    2789 log.go:172] (0xc0007a0630) Reply frame received for 5\nI0220 14:56:29.905742    2789 log.go:172] (0xc0007a0630) Data frame received for 3\nI0220 14:56:29.905829    2789 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0220 14:56:29.905853    2789 log.go:172] (0xc0008c8000) (3) Data frame sent\nI0220 14:56:29.905891    2789 log.go:172] (0xc0007a0630) Data frame received for 5\nI0220 14:56:29.905902    2789 log.go:172] (0xc0006ec000) (5) Data frame handling\nI0220 14:56:29.905922    2789 log.go:172] (0xc0006ec000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 14:56:30.047107    2789 log.go:172] (0xc0007a0630) Data frame received for 1\nI0220 14:56:30.047141    2789 log.go:172] (0xc0007bebe0) (1) Data frame handling\nI0220 14:56:30.047158    2789 log.go:172] (0xc0007bebe0) (1) Data frame sent\nI0220 14:56:30.047174    2789 log.go:172] (0xc0007a0630) (0xc0007bebe0) Stream removed, broadcasting: 1\nI0220 14:56:30.047436    2789 log.go:172] (0xc0007a0630) (0xc0008c8000) Stream removed, broadcasting: 3\nI0220 14:56:30.047505    2789 log.go:172] (0xc0007a0630) (0xc0006ec000) Stream removed, broadcasting: 5\nI0220 14:56:30.047531    2789 log.go:172] (0xc0007a0630) (0xc0007bebe0) Stream removed, broadcasting: 1\nI0220 14:56:30.047538    2789 log.go:172] (0xc0007a0630) (0xc0008c8000) Stream removed, broadcasting: 3\nI0220 14:56:30.047542    2789 log.go:172] (0xc0007a0630) (0xc0006ec000) Stream removed, broadcasting: 5\nI0220 14:56:30.047782    2789 log.go:172] (0xc0007a0630) Go away received\n"
Feb 20 14:56:30.056: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 14:56:30.056: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 14:56:30.066: INFO: Found 1 stateful pods, waiting for 3
Feb 20 14:56:40.078: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 14:56:40.079: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 14:56:40.079: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 20 14:56:50.098: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 14:56:50.098: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 14:56:50.098: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with an unhealthy stateful pod
Feb 20 14:56:50.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 14:56:50.725: INFO: stderr: "I0220 14:56:50.347210    2807 log.go:172] (0xc0009220b0) (0xc00033e6e0) Create stream\nI0220 14:56:50.347334    2807 log.go:172] (0xc0009220b0) (0xc00033e6e0) Stream added, broadcasting: 1\nI0220 14:56:50.359708    2807 log.go:172] (0xc0009220b0) Reply frame received for 1\nI0220 14:56:50.359774    2807 log.go:172] (0xc0009220b0) (0xc0008ce000) Create stream\nI0220 14:56:50.359789    2807 log.go:172] (0xc0009220b0) (0xc0008ce000) Stream added, broadcasting: 3\nI0220 14:56:50.363272    2807 log.go:172] (0xc0009220b0) Reply frame received for 3\nI0220 14:56:50.363305    2807 log.go:172] (0xc0009220b0) (0xc00099c000) Create stream\nI0220 14:56:50.363332    2807 log.go:172] (0xc0009220b0) (0xc00099c000) Stream added, broadcasting: 5\nI0220 14:56:50.367824    2807 log.go:172] (0xc0009220b0) Reply frame received for 5\nI0220 14:56:50.504774    2807 log.go:172] (0xc0009220b0) Data frame received for 3\nI0220 14:56:50.504843    2807 log.go:172] (0xc0008ce000) (3) Data frame handling\nI0220 14:56:50.504864    2807 log.go:172] (0xc0008ce000) (3) Data frame sent\nI0220 14:56:50.504959    2807 log.go:172] (0xc0009220b0) Data frame received for 5\nI0220 14:56:50.505001    2807 log.go:172] (0xc00099c000) (5) Data frame handling\nI0220 14:56:50.505039    2807 log.go:172] (0xc00099c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 14:56:50.707555    2807 log.go:172] (0xc0009220b0) Data frame received for 1\nI0220 14:56:50.707696    2807 log.go:172] (0xc00033e6e0) (1) Data frame handling\nI0220 14:56:50.707744    2807 log.go:172] (0xc00033e6e0) (1) Data frame sent\nI0220 14:56:50.707785    2807 log.go:172] (0xc0009220b0) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0220 14:56:50.707983    2807 log.go:172] (0xc0009220b0) (0xc0008ce000) Stream removed, broadcasting: 3\nI0220 14:56:50.710763    2807 log.go:172] (0xc0009220b0) (0xc00099c000) Stream removed, broadcasting: 5\nI0220 14:56:50.711076    2807 log.go:172] (0xc0009220b0) Go away received\nI0220 14:56:50.711190    2807 log.go:172] (0xc0009220b0) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0220 14:56:50.711227    2807 log.go:172] (0xc0009220b0) (0xc0008ce000) Stream removed, broadcasting: 3\nI0220 14:56:50.711249    2807 log.go:172] (0xc0009220b0) (0xc00099c000) Stream removed, broadcasting: 5\n"
Feb 20 14:56:50.726: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 14:56:50.726: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 14:56:50.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 14:56:51.223: INFO: stderr: "I0220 14:56:50.911559    2826 log.go:172] (0xc000104790) (0xc00063a780) Create stream\nI0220 14:56:50.911658    2826 log.go:172] (0xc000104790) (0xc00063a780) Stream added, broadcasting: 1\nI0220 14:56:50.914299    2826 log.go:172] (0xc000104790) Reply frame received for 1\nI0220 14:56:50.914324    2826 log.go:172] (0xc000104790) (0xc0008ee000) Create stream\nI0220 14:56:50.914334    2826 log.go:172] (0xc000104790) (0xc0008ee000) Stream added, broadcasting: 3\nI0220 14:56:50.915430    2826 log.go:172] (0xc000104790) Reply frame received for 3\nI0220 14:56:50.915463    2826 log.go:172] (0xc000104790) (0xc000170000) Create stream\nI0220 14:56:50.915477    2826 log.go:172] (0xc000104790) (0xc000170000) Stream added, broadcasting: 5\nI0220 14:56:50.916449    2826 log.go:172] (0xc000104790) Reply frame received for 5\nI0220 14:56:51.065102    2826 log.go:172] (0xc000104790) Data frame received for 5\nI0220 14:56:51.065128    2826 log.go:172] (0xc000170000) (5) Data frame handling\nI0220 14:56:51.065137    2826 log.go:172] (0xc000170000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 14:56:51.134667    2826 log.go:172] (0xc000104790) Data frame received for 3\nI0220 14:56:51.134699    2826 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0220 14:56:51.134714    2826 log.go:172] (0xc0008ee000) (3) Data frame sent\nI0220 14:56:51.214405    2826 log.go:172] (0xc000104790) Data frame received for 1\nI0220 14:56:51.214595    2826 log.go:172] (0xc00063a780) (1) Data frame handling\nI0220 14:56:51.214647    2826 log.go:172] (0xc00063a780) (1) Data frame sent\nI0220 14:56:51.215643    2826 log.go:172] (0xc000104790) (0xc0008ee000) Stream removed, broadcasting: 3\nI0220 14:56:51.215944    2826 log.go:172] (0xc000104790) (0xc00063a780) Stream removed, broadcasting: 1\nI0220 14:56:51.216115    2826 log.go:172] (0xc000104790) (0xc000170000) Stream removed, broadcasting: 5\nI0220 14:56:51.216173    2826 log.go:172] (0xc000104790) Go away received\nI0220 14:56:51.216425    2826 log.go:172] (0xc000104790) (0xc00063a780) Stream removed, broadcasting: 1\nI0220 14:56:51.216444    2826 log.go:172] (0xc000104790) (0xc0008ee000) Stream removed, broadcasting: 3\nI0220 14:56:51.216459    2826 log.go:172] (0xc000104790) (0xc000170000) Stream removed, broadcasting: 5\n"
Feb 20 14:56:51.223: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 14:56:51.223: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 14:56:51.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 14:56:51.751: INFO: stderr: "I0220 14:56:51.456551    2842 log.go:172] (0xc000596420) (0xc000522640) Create stream\nI0220 14:56:51.456667    2842 log.go:172] (0xc000596420) (0xc000522640) Stream added, broadcasting: 1\nI0220 14:56:51.462324    2842 log.go:172] (0xc000596420) Reply frame received for 1\nI0220 14:56:51.462378    2842 log.go:172] (0xc000596420) (0xc00058e1e0) Create stream\nI0220 14:56:51.462391    2842 log.go:172] (0xc000596420) (0xc00058e1e0) Stream added, broadcasting: 3\nI0220 14:56:51.464775    2842 log.go:172] (0xc000596420) Reply frame received for 3\nI0220 14:56:51.464823    2842 log.go:172] (0xc000596420) (0xc0005943c0) Create stream\nI0220 14:56:51.464839    2842 log.go:172] (0xc000596420) (0xc0005943c0) Stream added, broadcasting: 5\nI0220 14:56:51.466057    2842 log.go:172] (0xc000596420) Reply frame received for 5\nI0220 14:56:51.566048    2842 log.go:172] (0xc000596420) Data frame received for 5\nI0220 14:56:51.566130    2842 log.go:172] (0xc0005943c0) (5) Data frame handling\nI0220 14:56:51.566162    2842 log.go:172] (0xc0005943c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 14:56:51.604820    2842 log.go:172] (0xc000596420) Data frame received for 3\nI0220 14:56:51.604854    2842 log.go:172] (0xc00058e1e0) (3) Data frame handling\nI0220 14:56:51.604882    2842 log.go:172] (0xc00058e1e0) (3) Data frame sent\nI0220 14:56:51.738240    2842 log.go:172] (0xc000596420) Data frame received for 1\nI0220 14:56:51.738358    2842 log.go:172] (0xc000596420) (0xc00058e1e0) Stream removed, broadcasting: 3\nI0220 14:56:51.738400    2842 log.go:172] (0xc000522640) (1) Data frame handling\nI0220 14:56:51.738410    2842 log.go:172] (0xc000522640) (1) Data frame sent\nI0220 14:56:51.738416    2842 log.go:172] (0xc000596420) (0xc0005943c0) Stream removed, broadcasting: 5\nI0220 14:56:51.738565    2842 log.go:172] (0xc000596420) (0xc000522640) Stream removed, broadcasting: 1\nI0220 14:56:51.738593    2842 log.go:172] (0xc000596420) Go away received\nI0220 14:56:51.739263    2842 log.go:172] (0xc000596420) (0xc000522640) Stream removed, broadcasting: 1\nI0220 14:56:51.739278    2842 log.go:172] (0xc000596420) (0xc00058e1e0) Stream removed, broadcasting: 3\nI0220 14:56:51.739291    2842 log.go:172] (0xc000596420) (0xc0005943c0) Stream removed, broadcasting: 5\n"
Feb 20 14:56:51.751: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 14:56:51.751: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 14:56:51.751: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 14:56:51.760: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 20 14:57:01.787: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 14:57:01.787: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 14:57:01.787: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 14:57:01.877: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999163s
Feb 20 14:57:02.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.934606916s
Feb 20 14:57:03.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.928132045s
Feb 20 14:57:04.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.913915749s
Feb 20 14:57:05.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.891511421s
Feb 20 14:57:07.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.854484861s
Feb 20 14:57:08.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.761019594s
Feb 20 14:57:09.760: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.062234796s
Feb 20 14:57:10.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.052338393s
Feb 20 14:57:11.784: INFO: Verifying statefulset ss doesn't scale past 3 for another 41.41313ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9322
Feb 20 14:57:12.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 14:57:13.514: INFO: stderr: "I0220 14:57:13.026182    2864 log.go:172] (0xc000944c60) (0xc0009410e0) Create stream\nI0220 14:57:13.026368    2864 log.go:172] (0xc000944c60) (0xc0009410e0) Stream added, broadcasting: 1\nI0220 14:57:13.043987    2864 log.go:172] (0xc000944c60) Reply frame received for 1\nI0220 14:57:13.044033    2864 log.go:172] (0xc000944c60) (0xc000940000) Create stream\nI0220 14:57:13.044040    2864 log.go:172] (0xc000944c60) (0xc000940000) Stream added, broadcasting: 3\nI0220 14:57:13.046345    2864 log.go:172] (0xc000944c60) Reply frame received for 3\nI0220 14:57:13.046380    2864 log.go:172] (0xc000944c60) (0xc0000d4320) Create stream\nI0220 14:57:13.046388    2864 log.go:172] (0xc000944c60) (0xc0000d4320) Stream added, broadcasting: 5\nI0220 14:57:13.048528    2864 log.go:172] (0xc000944c60) Reply frame received for 5\nI0220 14:57:13.259302    2864 log.go:172] (0xc000944c60) Data frame received for 5\nI0220 14:57:13.259409    2864 log.go:172] (0xc0000d4320) (5) Data frame handling\nI0220 14:57:13.259433    2864 log.go:172] (0xc0000d4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 14:57:13.259638    2864 log.go:172] (0xc000944c60) Data frame received for 3\nI0220 14:57:13.259658    2864 log.go:172] (0xc000940000) (3) Data frame handling\nI0220 14:57:13.259687    2864 log.go:172] (0xc000940000) (3) Data frame sent\nI0220 14:57:13.499093    2864 log.go:172] (0xc000944c60) (0xc000940000) Stream removed, broadcasting: 3\nI0220 14:57:13.499280    2864 log.go:172] (0xc000944c60) Data frame received for 1\nI0220 14:57:13.499319    2864 log.go:172] (0xc0009410e0) (1) Data frame handling\nI0220 14:57:13.499346    2864 log.go:172] (0xc0009410e0) (1) Data frame sent\nI0220 14:57:13.499382    2864 log.go:172] (0xc000944c60) (0xc0000d4320) Stream removed, broadcasting: 5\nI0220 14:57:13.499429    2864 log.go:172] (0xc000944c60) (0xc0009410e0) Stream removed, broadcasting: 1\nI0220 14:57:13.499456    2864 log.go:172] (0xc000944c60) Go away received\nI0220 14:57:13.500244    2864 log.go:172] (0xc000944c60) (0xc0009410e0) Stream removed, broadcasting: 1\nI0220 14:57:13.500264    2864 log.go:172] (0xc000944c60) (0xc000940000) Stream removed, broadcasting: 3\nI0220 14:57:13.500276    2864 log.go:172] (0xc000944c60) (0xc0000d4320) Stream removed, broadcasting: 5\n"
Feb 20 14:57:13.514: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 14:57:13.514: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 14:57:13.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 14:57:14.007: INFO: stderr: "I0220 14:57:13.760007    2880 log.go:172] (0xc000987130) (0xc000982f00) Create stream\nI0220 14:57:13.760198    2880 log.go:172] (0xc000987130) (0xc000982f00) Stream added, broadcasting: 1\nI0220 14:57:13.769908    2880 log.go:172] (0xc000987130) Reply frame received for 1\nI0220 14:57:13.769991    2880 log.go:172] (0xc000987130) (0xc000982000) Create stream\nI0220 14:57:13.770005    2880 log.go:172] (0xc000987130) (0xc000982000) Stream added, broadcasting: 3\nI0220 14:57:13.771927    2880 log.go:172] (0xc000987130) Reply frame received for 3\nI0220 14:57:13.772021    2880 log.go:172] (0xc000987130) (0xc0006b8640) Create stream\nI0220 14:57:13.772041    2880 log.go:172] (0xc000987130) (0xc0006b8640) Stream added, broadcasting: 5\nI0220 14:57:13.773647    2880 log.go:172] (0xc000987130) Reply frame received for 5\nI0220 14:57:13.886881    2880 log.go:172] (0xc000987130) Data frame received for 5\nI0220 14:57:13.887063    2880 log.go:172] (0xc0006b8640) (5) Data frame handling\nI0220 14:57:13.887121    2880 log.go:172] (0xc0006b8640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 14:57:13.887157    2880 log.go:172] (0xc000987130) Data frame received for 3\nI0220 14:57:13.887364    2880 log.go:172] (0xc000982000) (3) Data frame handling\nI0220 14:57:13.887405    2880 log.go:172] (0xc000982000) (3) Data frame sent\nI0220 14:57:13.996910    2880 log.go:172] (0xc000987130) Data frame received for 1\nI0220 14:57:13.997146    2880 log.go:172] (0xc000987130) (0xc0006b8640) Stream removed, broadcasting: 5\nI0220 14:57:13.997188    2880 log.go:172] (0xc000982f00) (1) Data frame handling\nI0220 14:57:13.997217    2880 log.go:172] (0xc000982f00) (1) Data frame sent\nI0220 14:57:13.997243    2880 log.go:172] (0xc000987130) (0xc000982000) Stream removed, broadcasting: 3\nI0220 14:57:13.997261    2880 log.go:172] (0xc000987130) (0xc000982f00) Stream removed, broadcasting: 1\nI0220 14:57:13.997270    2880 log.go:172] (0xc000987130) Go away received\nI0220 14:57:13.998074    2880 log.go:172] (0xc000987130) (0xc000982f00) Stream removed, broadcasting: 1\nI0220 14:57:13.998099    2880 log.go:172] (0xc000987130) (0xc000982000) Stream removed, broadcasting: 3\nI0220 14:57:13.998107    2880 log.go:172] (0xc000987130) (0xc0006b8640) Stream removed, broadcasting: 5\n"
Feb 20 14:57:14.007: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 14:57:14.007: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 14:57:14.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9322 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 14:57:14.501: INFO: stderr: "I0220 14:57:14.196277    2899 log.go:172] (0xc0005cc9a0) (0xc000372aa0) Create stream\nI0220 14:57:14.196464    2899 log.go:172] (0xc0005cc9a0) (0xc000372aa0) Stream added, broadcasting: 1\nI0220 14:57:14.205087    2899 log.go:172] (0xc0005cc9a0) Reply frame received for 1\nI0220 14:57:14.205167    2899 log.go:172] (0xc0005cc9a0) (0xc0008b2000) Create stream\nI0220 14:57:14.205213    2899 log.go:172] (0xc0005cc9a0) (0xc0008b2000) Stream added, broadcasting: 3\nI0220 14:57:14.208345    2899 log.go:172] (0xc0005cc9a0) Reply frame received for 3\nI0220 14:57:14.208450    2899 log.go:172] (0xc0005cc9a0) (0xc000752000) Create stream\nI0220 14:57:14.208487    2899 log.go:172] (0xc0005cc9a0) (0xc000752000) Stream added, broadcasting: 5\nI0220 14:57:14.210653    2899 log.go:172] (0xc0005cc9a0) Reply frame received for 5\nI0220 14:57:14.372053    2899 log.go:172] (0xc0005cc9a0) Data frame received for 5\nI0220 14:57:14.372131    2899 log.go:172] (0xc000752000) (5) Data frame handling\nI0220 14:57:14.372150    2899 log.go:172] (0xc000752000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 14:57:14.372558    2899 log.go:172] (0xc0005cc9a0) Data frame received for 3\nI0220 14:57:14.372593    2899 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0220 14:57:14.372609    2899 log.go:172] (0xc0008b2000) (3) Data frame sent\nI0220 14:57:14.488811    2899 log.go:172] (0xc0005cc9a0) (0xc0008b2000) Stream removed, broadcasting: 3\nI0220 14:57:14.489143    2899 log.go:172] (0xc0005cc9a0) Data frame received for 1\nI0220 14:57:14.489224    2899 log.go:172] (0xc0005cc9a0) (0xc000752000) Stream removed, broadcasting: 5\nI0220 14:57:14.489307    2899 log.go:172] (0xc000372aa0) (1) Data frame handling\nI0220 14:57:14.489329    2899 log.go:172] (0xc000372aa0) (1) Data frame sent\nI0220 14:57:14.489335    2899 log.go:172] (0xc0005cc9a0) (0xc000372aa0) Stream removed, broadcasting: 1\nI0220 14:57:14.489349    2899 log.go:172] (0xc0005cc9a0) Go away received\nI0220 14:57:14.490143    2899 log.go:172] (0xc0005cc9a0) (0xc000372aa0) Stream removed, broadcasting: 1\nI0220 14:57:14.490165    2899 log.go:172] (0xc0005cc9a0) (0xc0008b2000) Stream removed, broadcasting: 3\nI0220 14:57:14.490180    2899 log.go:172] (0xc0005cc9a0) (0xc000752000) Stream removed, broadcasting: 5\n"
Feb 20 14:57:14.501: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 14:57:14.501: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 14:57:14.501: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 20 14:57:44.546: INFO: Deleting all statefulset in ns statefulset-9322
Feb 20 14:57:44.558: INFO: Scaling statefulset ss to 0
Feb 20 14:57:44.576: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 14:57:44.581: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:57:44.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9322" for this suite.
Feb 20 14:57:50.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:57:50.761: INFO: namespace statefulset-9322 deletion completed in 6.143386734s

• [SLOW TEST:112.290 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
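The repeated mv commands above are how this suite toggles pod health: the stateful pods serve /usr/share/nginx/html over HTTP, so moving index.html to /tmp makes the readiness probe fail (Ready=false) and ordered scaling halts at that pod; moving the file back restores readiness and scaling resumes, in ordinal order on the way up and reverse order on the way down. A sketch of the kind of probe involved, using the v1.15-era Probe type with its embedded Handler (the exact probe parameters are assumptions, not shown in the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nginxReadiness sketches an HTTP readiness check against the nginx docroot.
// With index.html moved away, GET /index.html stops returning 200, the pod
// reports Ready=false, and the StatefulSet controller will neither create the
// next ordinal (scale up) nor delete the previous one (scale down).
func nginxReadiness() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
		},
		PeriodSeconds:    1, // assumed: a short period makes the halt visible quickly
		FailureThreshold: 1, // assumed
	}
}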
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:57:50.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:57:59.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3640" for this suite.
Feb 20 14:58:21.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:58:22.093: INFO: namespace replication-controller-3640 deletion completed in 22.121931479s

• [SLOW TEST:31.332 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
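Adoption here relies on two conditions: the orphan pod's labels match the controller's selector, and the pod has no existing controller ownerReference, so the ReplicationController sets itself as the pod's owner instead of creating a replacement. A minimal sketch of such a controller (the label key/value mirror the step names above; the image is an assumption):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// adoptingRC selects on the same 'name' label the orphan pod already carries,
// so on creation the controller adopts that pod rather than starting a new one.
func adoptingRC(ns string) *corev1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption"} // must match the orphan pod's labels
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Namespace: ns},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine", // assumed image
					}},
				},
			},
		},
	}
}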
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:58:22.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 20 14:58:22.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 20 14:58:22.446: INFO: stderr: ""
Feb 20 14:58:22.446: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:58:22.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8276" for this suite.
Feb 20 14:58:28.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:58:28.697: INFO: namespace kubectl-8276 deletion completed in 6.246386996s

• [SLOW TEST:6.604 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:58:28.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8e587468-8c5d-4d71-94aa-f05c03552d0d
STEP: Creating a pod to test consume secrets
Feb 20 14:58:28.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23" in namespace "projected-8953" to be "success or failure"
Feb 20 14:58:28.924: INFO: Pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23": Phase="Pending", Reason="", readiness=false. Elapsed: 33.738346ms
Feb 20 14:58:30.935: INFO: Pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044747094s
Feb 20 14:58:32.954: INFO: Pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06425868s
Feb 20 14:58:34.964: INFO: Pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074129837s
Feb 20 14:58:36.979: INFO: Pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08910934s
STEP: Saw pod success
Feb 20 14:58:36.979: INFO: Pod "pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23" satisfied condition "success or failure"
Feb 20 14:58:36.990: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23 container projected-secret-volume-test: 
STEP: delete the pod
Feb 20 14:58:37.125: INFO: Waiting for pod pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23 to disappear
Feb 20 14:58:37.163: INFO: Pod pod-projected-secrets-92e9385c-f73a-40ca-a768-d0e27dbeea23 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:58:37.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8953" for this suite.
Feb 20 14:58:43.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:58:43.358: INFO: namespace projected-8953 deletion completed in 6.19007555s

• [SLOW TEST:14.660 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
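Projected secrets use the same volume shape as the projected configMap sketch earlier; only the projection source changes, and the secret's keys are materialized as files the same way. A minimal sketch of that one difference (secretName is a placeholder):

package sketch

import corev1 "k8s.io/api/core/v1"

// secretProjection mirrors the earlier configMap projection with the source
// swapped; drop this into ProjectedVolumeSource.Sources in place of the
// ConfigMap entry.
func secretProjection(secretName string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
		},
	}
}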
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:58:43.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 20 14:58:52.071: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b7cf6e6c-0ba1-4209-b5bf-f6a833ca2d1a"
Feb 20 14:58:52.071: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b7cf6e6c-0ba1-4209-b5bf-f6a833ca2d1a" in namespace "pods-2886" to be "terminated due to deadline exceeded"
Feb 20 14:58:52.075: INFO: Pod "pod-update-activedeadlineseconds-b7cf6e6c-0ba1-4209-b5bf-f6a833ca2d1a": Phase="Running", Reason="", readiness=true. Elapsed: 4.013355ms
Feb 20 14:58:54.085: INFO: Pod "pod-update-activedeadlineseconds-b7cf6e6c-0ba1-4209-b5bf-f6a833ca2d1a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014593902s
Feb 20 14:58:54.085: INFO: Pod "pod-update-activedeadlineseconds-b7cf6e6c-0ba1-4209-b5bf-f6a833ca2d1a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:58:54.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2886" for this suite.
Feb 20 14:59:00.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:59:00.236: INFO: namespace pods-2886 deletion completed in 6.143572558s

• [SLOW TEST:16.878 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
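What makes this test possible is that spec.activeDeadlineSeconds is one of the few pod-spec fields that may be changed after creation; lowering it on a running pod causes the kubelet to terminate the pod with Phase=Failed and Reason=DeadlineExceeded, exactly as logged above. A sketch of the mutation step (the deadline value is an assumption):

package sketch

import corev1 "k8s.io/api/core/v1"

// shortenDeadline lowers activeDeadlineSeconds on an already-running pod.
// After the modified object is submitted back through the pod client, the
// kubelet enforces the new deadline and fails the pod with DeadlineExceeded.
func shortenDeadline(pod *corev1.Pod) {
	deadline := int64(5) // assumed small value; the suite picks something similarly short
	pod.Spec.ActiveDeadlineSeconds = &deadline
}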
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:59:00.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 20 14:59:00.342: INFO: Waiting up to 5m0s for pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f" in namespace "emptydir-4549" to be "success or failure"
Feb 20 14:59:00.397: INFO: Pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 54.325404ms
Feb 20 14:59:02.404: INFO: Pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061804187s
Feb 20 14:59:04.411: INFO: Pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0687532s
Feb 20 14:59:06.420: INFO: Pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077848734s
Feb 20 14:59:08.444: INFO: Pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102041096s
STEP: Saw pod success
Feb 20 14:59:08.445: INFO: Pod "pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f" satisfied condition "success or failure"
Feb 20 14:59:08.449: INFO: Trying to get logs from node iruya-node pod pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f container test-container: 
STEP: delete the pod
Feb 20 14:59:08.543: INFO: Waiting for pod pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f to disappear
Feb 20 14:59:08.549: INFO: Pod pod-697f35dd-ec1b-49ab-b287-a5fa9a873a4f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 14:59:08.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4549" for this suite.
Feb 20 14:59:14.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 14:59:14.815: INFO: namespace emptydir-4549 deletion completed in 6.260445134s

• [SLOW TEST:14.578 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 14:59:14.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-cbe98828-9d27-4dc0-bf2d-aea3e5292992 in namespace container-probe-8927
Feb 20 14:59:23.032: INFO: Started pod busybox-cbe98828-9d27-4dc0-bf2d-aea3e5292992 in namespace container-probe-8927
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 14:59:23.035: INFO: Initial restart count of pod busybox-cbe98828-9d27-4dc0-bf2d-aea3e5292992 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:03:24.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8927" for this suite.
Feb 20 15:03:30.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:03:30.669: INFO: namespace container-probe-8927 deletion completed in 6.146084599s

• [SLOW TEST:255.854 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
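This negative liveness case is the mirror image of a restart test: the container creates /tmp/health and then just sleeps, so the exec probe "cat /tmp/health" always exits 0 and restartCount stays at 0 for the whole ~4-minute observation window. A minimal sketch, assuming illustrative sleep duration and probe timings, using the v1.15-era embedded Handler:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// healthyLivenessPod keeps /tmp/health in place for the container's lifetime,
// so the exec liveness probe never fails and no restart is triggered.
func healthyLivenessPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // assumed: time for the file to exist
					PeriodSeconds:       5,  // assumed
				},
			}},
		},
	}
}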
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 15:03:30.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 20 15:03:30.808: INFO: Waiting up to 5m0s for pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153" in namespace "emptydir-9021" to be "success or failure"
Feb 20 15:03:30.826: INFO: Pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153": Phase="Pending", Reason="", readiness=false. Elapsed: 17.271108ms
Feb 20 15:03:32.836: INFO: Pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027382715s
Feb 20 15:03:34.842: INFO: Pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033109118s
Feb 20 15:03:36.856: INFO: Pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047075538s
Feb 20 15:03:38.868: INFO: Pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059605613s
STEP: Saw pod success
Feb 20 15:03:38.868: INFO: Pod "pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153" satisfied condition "success or failure"
Feb 20 15:03:38.872: INFO: Trying to get logs from node iruya-node pod pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153 container test-container: 
STEP: delete the pod
Feb 20 15:03:39.012: INFO: Waiting for pod pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153 to disappear
Feb 20 15:03:39.018: INFO: Pod pod-ccd8ccfb-7b17-42e0-8c4d-632e8f153153 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:03:39.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9021" for this suite.
Feb 20 15:03:45.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:03:45.148: INFO: namespace emptydir-9021 deletion completed in 6.121787348s

• [SLOW TEST:14.479 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
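The same behaviour can be reproduced with a throwaway pod: an emptyDir on the default (node-disk) medium, mounted into a container running as a non-root UID, which creates a file, sets mode 0777, and prints the mode back. A minimal sketch; the name, UID, and image are illustrative assumptions (the suite itself uses a dedicated mount-test image).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo           # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, matching the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && stat -c '%a' /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium: backed by node disk, not tmpfs
EOF

# The pod should reach Succeeded and its log should print 777:
kubectl logs emptydir-0777-demo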
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 15:03:45.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4230
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4230
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-4230
Feb 20 15:03:45.363: INFO: Found 0 stateful pods, waiting for 1
Feb 20 15:03:55.372: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with an unhealthy stateful pod
Feb 20 15:03:55.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 15:03:57.881: INFO: stderr: "I0220 15:03:57.534870    2939 log.go:172] (0xc0005a8420) (0xc0005e2640) Create stream\nI0220 15:03:57.534908    2939 log.go:172] (0xc0005a8420) (0xc0005e2640) Stream added, broadcasting: 1\nI0220 15:03:57.539896    2939 log.go:172] (0xc0005a8420) Reply frame received for 1\nI0220 15:03:57.539930    2939 log.go:172] (0xc0005a8420) (0xc00046e1e0) Create stream\nI0220 15:03:57.539940    2939 log.go:172] (0xc0005a8420) (0xc00046e1e0) Stream added, broadcasting: 3\nI0220 15:03:57.542043    2939 log.go:172] (0xc0005a8420) Reply frame received for 3\nI0220 15:03:57.542063    2939 log.go:172] (0xc0005a8420) (0xc0005e26e0) Create stream\nI0220 15:03:57.542069    2939 log.go:172] (0xc0005a8420) (0xc0005e26e0) Stream added, broadcasting: 5\nI0220 15:03:57.543974    2939 log.go:172] (0xc0005a8420) Reply frame received for 5\nI0220 15:03:57.663656    2939 log.go:172] (0xc0005a8420) Data frame received for 5\nI0220 15:03:57.663694    2939 log.go:172] (0xc0005e26e0) (5) Data frame handling\nI0220 15:03:57.663715    2939 log.go:172] (0xc0005e26e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 15:03:57.706701    2939 log.go:172] (0xc0005a8420) Data frame received for 3\nI0220 15:03:57.706789    2939 log.go:172] (0xc00046e1e0) (3) Data frame handling\nI0220 15:03:57.706863    2939 log.go:172] (0xc00046e1e0) (3) Data frame sent\nI0220 15:03:57.859649    2939 log.go:172] (0xc0005a8420) (0xc00046e1e0) Stream removed, broadcasting: 3\nI0220 15:03:57.859839    2939 log.go:172] (0xc0005a8420) Data frame received for 1\nI0220 15:03:57.859894    2939 log.go:172] (0xc0005a8420) (0xc0005e26e0) Stream removed, broadcasting: 5\nI0220 15:03:57.859981    2939 log.go:172] (0xc0005e2640) (1) Data frame handling\nI0220 15:03:57.860013    2939 log.go:172] (0xc0005e2640) (1) Data frame sent\nI0220 15:03:57.860036    2939 log.go:172] (0xc0005a8420) (0xc0005e2640) Stream removed, broadcasting: 1\nI0220 15:03:57.860070    2939 log.go:172] (0xc0005a8420) Go away received\nI0220 15:03:57.860656    2939 log.go:172] (0xc0005a8420) (0xc0005e2640) Stream removed, broadcasting: 1\nI0220 15:03:57.860705    2939 log.go:172] (0xc0005a8420) (0xc00046e1e0) Stream removed, broadcasting: 3\nI0220 15:03:57.860738    2939 log.go:172] (0xc0005a8420) (0xc0005e26e0) Stream removed, broadcasting: 5\n"
Feb 20 15:03:57.881: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 15:03:57.881: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

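The mv above is how the suite toggles pod health without restarting anything: the stateful pods serve /usr/share/nginx/html over HTTP and carry a readiness probe on that content, so hiding index.html flips Ready to false while the container keeps running, and moving it back restores readiness. A comparable StatefulSet looks roughly like the sketch below; the probe shape and podManagementPolicy: Parallel (which is what makes the scaling a "burst") are assumptions about how this test family is built, not fields read from this log.

kubectl apply -n statefulset-4230 -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test                         # headless service, matching "Creating service test" above
spec:
  clusterIP: None
  selector:
    app: ss-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: Parallel      # burst mode: pods start/stop without waiting on ordinal order
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:              # probe shape is an assumption
          httpGet:
            path: /index.html
            port: 80
          periodSeconds: 1
EOF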
Feb 20 15:03:57.896: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 20 15:04:07.921: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 15:04:07.921: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 15:04:08.077: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 15:04:08.077: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:08.077: INFO: 
Feb 20 15:04:08.077: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 20 15:04:09.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.859708602s
Feb 20 15:04:10.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.459344032s
Feb 20 15:04:11.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.360043998s
Feb 20 15:04:13.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.337806797s
Feb 20 15:04:15.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.243649908s
Feb 20 15:04:16.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.558091991s
Feb 20 15:04:17.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 508.797005ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4230
Feb 20 15:04:18.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:04:18.942: INFO: stderr: "I0220 15:04:18.654488    2969 log.go:172] (0xc0001248f0) (0xc000320d20) Create stream\nI0220 15:04:18.654629    2969 log.go:172] (0xc0001248f0) (0xc000320d20) Stream added, broadcasting: 1\nI0220 15:04:18.659982    2969 log.go:172] (0xc0001248f0) Reply frame received for 1\nI0220 15:04:18.660008    2969 log.go:172] (0xc0001248f0) (0xc0009420a0) Create stream\nI0220 15:04:18.660016    2969 log.go:172] (0xc0001248f0) (0xc0009420a0) Stream added, broadcasting: 3\nI0220 15:04:18.661598    2969 log.go:172] (0xc0001248f0) Reply frame received for 3\nI0220 15:04:18.661628    2969 log.go:172] (0xc0001248f0) (0xc0008a5680) Create stream\nI0220 15:04:18.661659    2969 log.go:172] (0xc0001248f0) (0xc0008a5680) Stream added, broadcasting: 5\nI0220 15:04:18.663257    2969 log.go:172] (0xc0001248f0) Reply frame received for 5\nI0220 15:04:18.752470    2969 log.go:172] (0xc0001248f0) Data frame received for 5\nI0220 15:04:18.752517    2969 log.go:172] (0xc0008a5680) (5) Data frame handling\nI0220 15:04:18.752526    2969 log.go:172] (0xc0008a5680) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 15:04:18.752533    2969 log.go:172] (0xc0001248f0) Data frame received for 3\nI0220 15:04:18.752540    2969 log.go:172] (0xc0009420a0) (3) Data frame handling\nI0220 15:04:18.752544    2969 log.go:172] (0xc0009420a0) (3) Data frame sent\nI0220 15:04:18.934666    2969 log.go:172] (0xc0001248f0) (0xc0009420a0) Stream removed, broadcasting: 3\nI0220 15:04:18.934718    2969 log.go:172] (0xc0001248f0) Data frame received for 1\nI0220 15:04:18.934732    2969 log.go:172] (0xc000320d20) (1) Data frame handling\nI0220 15:04:18.934738    2969 log.go:172] (0xc000320d20) (1) Data frame sent\nI0220 15:04:18.934747    2969 log.go:172] (0xc0001248f0) (0xc000320d20) Stream removed, broadcasting: 1\nI0220 15:04:18.934757    2969 log.go:172] (0xc0001248f0) (0xc0008a5680) Stream removed, broadcasting: 5\nI0220 15:04:18.934767    2969 log.go:172] (0xc0001248f0) Go away received\nI0220 15:04:18.935040    2969 log.go:172] (0xc0001248f0) (0xc000320d20) Stream removed, broadcasting: 1\nI0220 15:04:18.935052    2969 log.go:172] (0xc0001248f0) (0xc0009420a0) Stream removed, broadcasting: 3\nI0220 15:04:18.935057    2969 log.go:172] (0xc0001248f0) (0xc0008a5680) Stream removed, broadcasting: 5\n"
Feb 20 15:04:18.942: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 15:04:18.942: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 15:04:18.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:04:19.453: INFO: stderr: "I0220 15:04:19.201482    2984 log.go:172] (0xc0009e0370) (0xc00098c640) Create stream\nI0220 15:04:19.201781    2984 log.go:172] (0xc0009e0370) (0xc00098c640) Stream added, broadcasting: 1\nI0220 15:04:19.208337    2984 log.go:172] (0xc0009e0370) Reply frame received for 1\nI0220 15:04:19.208379    2984 log.go:172] (0xc0009e0370) (0xc00093a000) Create stream\nI0220 15:04:19.208396    2984 log.go:172] (0xc0009e0370) (0xc00093a000) Stream added, broadcasting: 3\nI0220 15:04:19.209839    2984 log.go:172] (0xc0009e0370) Reply frame received for 3\nI0220 15:04:19.209894    2984 log.go:172] (0xc0009e0370) (0xc0008bc000) Create stream\nI0220 15:04:19.209912    2984 log.go:172] (0xc0009e0370) (0xc0008bc000) Stream added, broadcasting: 5\nI0220 15:04:19.211432    2984 log.go:172] (0xc0009e0370) Reply frame received for 5\nI0220 15:04:19.361977    2984 log.go:172] (0xc0009e0370) Data frame received for 5\nI0220 15:04:19.362070    2984 log.go:172] (0xc0008bc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0220 15:04:19.362296    2984 log.go:172] (0xc0008bc000) (5) Data frame sent\nI0220 15:04:19.362325    2984 log.go:172] (0xc0009e0370) Data frame received for 5\nI0220 15:04:19.362336    2984 log.go:172] (0xc0008bc000) (5) Data frame handling\nI0220 15:04:19.362349    2984 log.go:172] (0xc0008bc000) (5) Data frame sent\nI0220 15:04:19.362360    2984 log.go:172] (0xc0009e0370) Data frame received for 5\nI0220 15:04:19.362379    2984 log.go:172] (0xc0008bc000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0220 15:04:19.362409    2984 log.go:172] (0xc0008bc000) (5) Data frame sent\nI0220 15:04:19.362420    2984 log.go:172] (0xc0009e0370) Data frame received for 3\nI0220 15:04:19.362434    2984 log.go:172] (0xc00093a000) (3) Data frame handling\nI0220 15:04:19.362449    2984 log.go:172] (0xc00093a000) (3) Data frame sent\nI0220 15:04:19.444010    2984 log.go:172] (0xc0009e0370) (0xc00093a000) Stream removed, broadcasting: 3\nI0220 15:04:19.444301    2984 log.go:172] (0xc0009e0370) Data frame received for 1\nI0220 15:04:19.444625    2984 log.go:172] (0xc0009e0370) (0xc0008bc000) Stream removed, broadcasting: 5\nI0220 15:04:19.444988    2984 log.go:172] (0xc00098c640) (1) Data frame handling\nI0220 15:04:19.445191    2984 log.go:172] (0xc00098c640) (1) Data frame sent\nI0220 15:04:19.445233    2984 log.go:172] (0xc0009e0370) (0xc00098c640) Stream removed, broadcasting: 1\nI0220 15:04:19.445355    2984 log.go:172] (0xc0009e0370) Go away received\nI0220 15:04:19.446291    2984 log.go:172] (0xc0009e0370) (0xc00098c640) Stream removed, broadcasting: 1\nI0220 15:04:19.446341    2984 log.go:172] (0xc0009e0370) (0xc00093a000) Stream removed, broadcasting: 3\nI0220 15:04:19.446364    2984 log.go:172] (0xc0009e0370) (0xc0008bc000) Stream removed, broadcasting: 5\n"
Feb 20 15:04:19.453: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 15:04:19.453: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 15:04:19.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:04:19.824: INFO: stderr: "I0220 15:04:19.585647    3004 log.go:172] (0xc000a64160) (0xc0009ac140) Create stream\nI0220 15:04:19.585771    3004 log.go:172] (0xc000a64160) (0xc0009ac140) Stream added, broadcasting: 1\nI0220 15:04:19.590240    3004 log.go:172] (0xc000a64160) Reply frame received for 1\nI0220 15:04:19.590294    3004 log.go:172] (0xc000a64160) (0xc0008c4000) Create stream\nI0220 15:04:19.590300    3004 log.go:172] (0xc000a64160) (0xc0008c4000) Stream added, broadcasting: 3\nI0220 15:04:19.591675    3004 log.go:172] (0xc000a64160) Reply frame received for 3\nI0220 15:04:19.591702    3004 log.go:172] (0xc000a64160) (0xc0003c61e0) Create stream\nI0220 15:04:19.591714    3004 log.go:172] (0xc000a64160) (0xc0003c61e0) Stream added, broadcasting: 5\nI0220 15:04:19.593279    3004 log.go:172] (0xc000a64160) Reply frame received for 5\nI0220 15:04:19.688679    3004 log.go:172] (0xc000a64160) Data frame received for 3\nI0220 15:04:19.688722    3004 log.go:172] (0xc0008c4000) (3) Data frame handling\nI0220 15:04:19.688733    3004 log.go:172] (0xc0008c4000) (3) Data frame sent\nI0220 15:04:19.688751    3004 log.go:172] (0xc000a64160) Data frame received for 5\nI0220 15:04:19.688756    3004 log.go:172] (0xc0003c61e0) (5) Data frame handling\nI0220 15:04:19.688767    3004 log.go:172] (0xc0003c61e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0220 15:04:19.809508    3004 log.go:172] (0xc000a64160) (0xc0008c4000) Stream removed, broadcasting: 3\nI0220 15:04:19.809734    3004 log.go:172] (0xc000a64160) Data frame received for 1\nI0220 15:04:19.809924    3004 log.go:172] (0xc000a64160) (0xc0003c61e0) Stream removed, broadcasting: 5\nI0220 15:04:19.810029    3004 log.go:172] (0xc0009ac140) (1) Data frame handling\nI0220 15:04:19.810057    3004 log.go:172] (0xc0009ac140) (1) Data frame sent\nI0220 15:04:19.810081    3004 log.go:172] (0xc000a64160) (0xc0009ac140) Stream removed, broadcasting: 1\nI0220 15:04:19.810109    3004 log.go:172] (0xc000a64160) Go away received\nI0220 15:04:19.811185    3004 log.go:172] (0xc000a64160) (0xc0009ac140) Stream removed, broadcasting: 1\nI0220 15:04:19.811207    3004 log.go:172] (0xc000a64160) (0xc0008c4000) Stream removed, broadcasting: 3\nI0220 15:04:19.811217    3004 log.go:172] (0xc000a64160) (0xc0003c61e0) Stream removed, broadcasting: 5\n"
Feb 20 15:04:19.824: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 15:04:19.824: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 15:04:19.829: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 15:04:19.829: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 15:04:19.829: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
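The scale operations themselves go through the API from inside the framework; run by hand against this namespace, the equivalent would simply be:

kubectl scale statefulset/ss --replicas=3 -n statefulset-4230   # burst up
kubectl scale statefulset/ss --replicas=0 -n statefulset-4230   # burst down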
STEP: Scale down will not halt with an unhealthy stateful pod
Feb 20 15:04:19.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 15:04:20.267: INFO: stderr: "I0220 15:04:19.989758    3025 log.go:172] (0xc00094e370) (0xc0008d25a0) Create stream\nI0220 15:04:19.989818    3025 log.go:172] (0xc00094e370) (0xc0008d25a0) Stream added, broadcasting: 1\nI0220 15:04:19.997042    3025 log.go:172] (0xc00094e370) Reply frame received for 1\nI0220 15:04:19.997083    3025 log.go:172] (0xc00094e370) (0xc000662500) Create stream\nI0220 15:04:19.997092    3025 log.go:172] (0xc00094e370) (0xc000662500) Stream added, broadcasting: 3\nI0220 15:04:19.998299    3025 log.go:172] (0xc00094e370) Reply frame received for 3\nI0220 15:04:19.998315    3025 log.go:172] (0xc00094e370) (0xc0008d26e0) Create stream\nI0220 15:04:19.998322    3025 log.go:172] (0xc00094e370) (0xc0008d26e0) Stream added, broadcasting: 5\nI0220 15:04:19.999671    3025 log.go:172] (0xc00094e370) Reply frame received for 5\nI0220 15:04:20.107758    3025 log.go:172] (0xc00094e370) Data frame received for 3\nI0220 15:04:20.107793    3025 log.go:172] (0xc000662500) (3) Data frame handling\nI0220 15:04:20.107807    3025 log.go:172] (0xc000662500) (3) Data frame sent\nI0220 15:04:20.107848    3025 log.go:172] (0xc00094e370) Data frame received for 5\nI0220 15:04:20.107868    3025 log.go:172] (0xc0008d26e0) (5) Data frame handling\nI0220 15:04:20.107887    3025 log.go:172] (0xc0008d26e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 15:04:20.252274    3025 log.go:172] (0xc00094e370) Data frame received for 1\nI0220 15:04:20.252494    3025 log.go:172] (0xc0008d25a0) (1) Data frame handling\nI0220 15:04:20.252559    3025 log.go:172] (0xc0008d25a0) (1) Data frame sent\nI0220 15:04:20.252588    3025 log.go:172] (0xc00094e370) (0xc0008d25a0) Stream removed, broadcasting: 1\nI0220 15:04:20.254829    3025 log.go:172] (0xc00094e370) (0xc000662500) Stream removed, broadcasting: 3\nI0220 15:04:20.255100    3025 log.go:172] (0xc00094e370) (0xc0008d26e0) Stream removed, broadcasting: 5\nI0220 15:04:20.255123    3025 log.go:172] (0xc00094e370) Go away received\nI0220 15:04:20.255248    3025 log.go:172] (0xc00094e370) (0xc0008d25a0) Stream removed, broadcasting: 1\nI0220 15:04:20.255296    3025 log.go:172] (0xc00094e370) (0xc000662500) Stream removed, broadcasting: 3\nI0220 15:04:20.255327    3025 log.go:172] (0xc00094e370) (0xc0008d26e0) Stream removed, broadcasting: 5\n"
Feb 20 15:04:20.267: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 15:04:20.267: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 15:04:20.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 15:04:20.777: INFO: stderr: "I0220 15:04:20.545277    3044 log.go:172] (0xc000a16420) (0xc000a508c0) Create stream\nI0220 15:04:20.545767    3044 log.go:172] (0xc000a16420) (0xc000a508c0) Stream added, broadcasting: 1\nI0220 15:04:20.560032    3044 log.go:172] (0xc000a16420) Reply frame received for 1\nI0220 15:04:20.560110    3044 log.go:172] (0xc000a16420) (0xc000a50000) Create stream\nI0220 15:04:20.560126    3044 log.go:172] (0xc000a16420) (0xc000a50000) Stream added, broadcasting: 3\nI0220 15:04:20.561317    3044 log.go:172] (0xc000a16420) Reply frame received for 3\nI0220 15:04:20.561386    3044 log.go:172] (0xc000a16420) (0xc00058c280) Create stream\nI0220 15:04:20.561406    3044 log.go:172] (0xc000a16420) (0xc00058c280) Stream added, broadcasting: 5\nI0220 15:04:20.562696    3044 log.go:172] (0xc000a16420) Reply frame received for 5\nI0220 15:04:20.651187    3044 log.go:172] (0xc000a16420) Data frame received for 5\nI0220 15:04:20.651287    3044 log.go:172] (0xc00058c280) (5) Data frame handling\nI0220 15:04:20.651322    3044 log.go:172] (0xc00058c280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 15:04:20.673894    3044 log.go:172] (0xc000a16420) Data frame received for 3\nI0220 15:04:20.674007    3044 log.go:172] (0xc000a50000) (3) Data frame handling\nI0220 15:04:20.674067    3044 log.go:172] (0xc000a50000) (3) Data frame sent\nI0220 15:04:20.768347    3044 log.go:172] (0xc000a16420) Data frame received for 1\nI0220 15:04:20.768478    3044 log.go:172] (0xc000a16420) (0xc00058c280) Stream removed, broadcasting: 5\nI0220 15:04:20.768539    3044 log.go:172] (0xc000a508c0) (1) Data frame handling\nI0220 15:04:20.768565    3044 log.go:172] (0xc000a508c0) (1) Data frame sent\nI0220 15:04:20.768594    3044 log.go:172] (0xc000a16420) (0xc000a50000) Stream removed, broadcasting: 3\nI0220 15:04:20.768640    3044 log.go:172] (0xc000a16420) (0xc000a508c0) Stream removed, broadcasting: 1\nI0220 15:04:20.769151    3044 log.go:172] (0xc000a16420) (0xc000a508c0) Stream removed, broadcasting: 1\nI0220 15:04:20.769164    3044 log.go:172] (0xc000a16420) (0xc000a50000) Stream removed, broadcasting: 3\nI0220 15:04:20.769170    3044 log.go:172] (0xc000a16420) (0xc00058c280) Stream removed, broadcasting: 5\n"
Feb 20 15:04:20.777: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 15:04:20.777: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 15:04:20.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 15:04:21.283: INFO: stderr: "I0220 15:04:20.977375    3065 log.go:172] (0xc00097a580) (0xc0006ecaa0) Create stream\nI0220 15:04:20.977437    3065 log.go:172] (0xc00097a580) (0xc0006ecaa0) Stream added, broadcasting: 1\nI0220 15:04:20.982575    3065 log.go:172] (0xc00097a580) Reply frame received for 1\nI0220 15:04:20.982649    3065 log.go:172] (0xc00097a580) (0xc0007f2000) Create stream\nI0220 15:04:20.982665    3065 log.go:172] (0xc00097a580) (0xc0007f2000) Stream added, broadcasting: 3\nI0220 15:04:20.984320    3065 log.go:172] (0xc00097a580) Reply frame received for 3\nI0220 15:04:20.984338    3065 log.go:172] (0xc00097a580) (0xc0007f20a0) Create stream\nI0220 15:04:20.984344    3065 log.go:172] (0xc00097a580) (0xc0007f20a0) Stream added, broadcasting: 5\nI0220 15:04:20.985472    3065 log.go:172] (0xc00097a580) Reply frame received for 5\nI0220 15:04:21.121436    3065 log.go:172] (0xc00097a580) Data frame received for 5\nI0220 15:04:21.123053    3065 log.go:172] (0xc0007f20a0) (5) Data frame handling\nI0220 15:04:21.123085    3065 log.go:172] (0xc0007f20a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0220 15:04:21.154147    3065 log.go:172] (0xc00097a580) Data frame received for 3\nI0220 15:04:21.154178    3065 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0220 15:04:21.154266    3065 log.go:172] (0xc0007f2000) (3) Data frame sent\nI0220 15:04:21.269222    3065 log.go:172] (0xc00097a580) (0xc0007f2000) Stream removed, broadcasting: 3\nI0220 15:04:21.269451    3065 log.go:172] (0xc00097a580) Data frame received for 1\nI0220 15:04:21.269709    3065 log.go:172] (0xc00097a580) (0xc0007f20a0) Stream removed, broadcasting: 5\nI0220 15:04:21.269899    3065 log.go:172] (0xc0006ecaa0) (1) Data frame handling\nI0220 15:04:21.269975    3065 log.go:172] (0xc0006ecaa0) (1) Data frame sent\nI0220 15:04:21.270063    3065 log.go:172] (0xc00097a580) (0xc0006ecaa0) Stream removed, broadcasting: 1\nI0220 15:04:21.271094    3065 log.go:172] (0xc00097a580) (0xc0006ecaa0) Stream removed, broadcasting: 1\nI0220 15:04:21.271161    3065 log.go:172] (0xc00097a580) (0xc0007f2000) Stream removed, broadcasting: 3\nI0220 15:04:21.271195    3065 log.go:172] (0xc00097a580) (0xc0007f20a0) Stream removed, broadcasting: 5\nI0220 15:04:21.271284    3065 log.go:172] (0xc00097a580) Go away received\n"
Feb 20 15:04:21.283: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 15:04:21.283: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 15:04:21.283: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 15:04:21.290: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 20 15:04:31.305: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 15:04:31.305: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 15:04:31.305: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 15:04:31.348: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 15:04:31.348: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:31.348: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:31.348: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:31.348: INFO: 
Feb 20 15:04:31.348: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 15:04:33.446: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 15:04:33.446: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:33.446: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:33.446: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:33.447: INFO: 
Feb 20 15:04:33.447: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 15:04:34.460: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 15:04:34.460: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:34.460: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:34.460: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:34.460: INFO: 
Feb 20 15:04:34.460: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 15:04:35.764: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 15:04:35.764: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:35.764: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:35.764: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:35.764: INFO: 
Feb 20 15:04:35.764: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 15:04:36.775: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 15:04:36.775: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:36.775: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:36.775: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:36.775: INFO: 
Feb 20 15:04:36.775: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 15:04:37.785: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 15:04:37.785: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:37.785: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:37.785: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:37.785: INFO: 
Feb 20 15:04:37.785: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 15:04:38.813: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 15:04:38.814: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:38.814: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:38.814: INFO: 
Feb 20 15:04:38.814: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 20 15:04:39.827: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 15:04:39.827: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:39.827: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:39.827: INFO: 
Feb 20 15:04:39.827: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 20 15:04:40.841: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 15:04:40.841: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:03:45 +0000 UTC  }]
Feb 20 15:04:40.841: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 15:04:08 +0000 UTC  }]
Feb 20 15:04:40.841: INFO: 
Feb 20 15:04:40.841: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4230
Feb 20 15:04:41.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:04:42.103: INFO: rc: 1
Feb 20 15:04:42.103: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001fbb470 exit status 1   true [0xc002adc658 0xc002adc670 0xc002adc688] [0xc002adc658 0xc002adc670 0xc002adc688] [0xc002adc668 0xc002adc680] [0xba6c50 0xba6c50] 0xc0028ff620 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 20 15:04:52.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:04:52.217: INFO: rc: 1
Feb 20 15:04:52.217: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fbb530 exit status 1   true [0xc002adc690 0xc002adc6a8 0xc002adc6c0] [0xc002adc690 0xc002adc6a8 0xc002adc6c0] [0xc002adc6a0 0xc002adc6b8] [0xba6c50 0xba6c50] 0xc0028ff980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
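From here the framework is just polling: it re-runs the same kubectl exec on a fixed 10s cadence, and once the ss-0 pod object has been deleted every attempt fails fast with NotFound. A shell equivalent of that retry loop (illustrative; kubectl only exits 0 once the pod exists and the exec can attach):

while ! kubectl exec -n statefulset-4230 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; do
  sleep 10   # same 10s backoff the framework logs
done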
Feb 20 15:05:02 through 15:08:35: INFO: [22 further identical RunHostCmd retries elided: the same kubectl exec against ss-0 was re-run every 10s, and each attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found]
Feb 20 15:08:45.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:08:45.366: INFO: rc: 1
Feb 20 15:08:45.367: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024940c0 exit status 1   true [0xc001634000 0xc001634038 0xc0016340e0] [0xc001634000 0xc001634038 0xc0016340e0] [0xc001634028 0xc0016340a0] [0xba6c50 0xba6c50] 0xc00191f260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 15:08:55.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:08:55.496: INFO: rc: 1
Feb 20 15:08:55.496: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002494210 exit status 1   true [0xc001634100 0xc001634160 0xc0016341e0] [0xc001634100 0xc001634160 0xc0016341e0] [0xc001634150 0xc0016341c0] [0xba6c50 0xba6c50] 0xc0016fc3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 15:09:05.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:09:05.694: INFO: rc: 1
Feb 20 15:09:05.695: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002494300 exit status 1   true [0xc001634218 0xc001634258 0xc0016342c0] [0xc001634218 0xc001634258 0xc0016342c0] [0xc001634250 0xc001634290] [0xba6c50 0xba6c50] 0xc0015821e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 15:09:15.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:09:15.889: INFO: rc: 1
Feb 20 15:09:15.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000749500 exit status 1   true [0xc002904000 0xc002904018 0xc002904030] [0xc002904000 0xc002904018 0xc002904030] [0xc002904010 0xc002904028] [0xba6c50 0xba6c50] 0xc00191f260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 15:09:25.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:09:26.063: INFO: rc: 1
Feb 20 15:09:26.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00286a0c0 exit status 1   true [0xc0001a00a8 0xc0001a0630 0xc0001a08b0] [0xc0001a00a8 0xc0001a0630 0xc0001a08b0] [0xc0001a0558 0xc0001a0700] [0xba6c50 0xba6c50] 0xc0022e29c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 15:09:36.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:09:36.172: INFO: rc: 1
Feb 20 15:09:36.172: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00286a180 exit status 1   true [0xc0001a0968 0xc0001a0af0 0xc0001a0e68] [0xc0001a0968 0xc0001a0af0 0xc0001a0e68] [0xc0001a0ac0 0xc0001a0d30] [0xba6c50 0xba6c50] 0xc0022e3b00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 15:09:46.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 15:09:46.319: INFO: rc: 1
Feb 20 15:09:46.320: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 20 15:09:46.320: INFO: Scaling statefulset ss to 0
Feb 20 15:09:46.336: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 20 15:09:46.339: INFO: Deleting all statefulset in ns statefulset-4230
Feb 20 15:09:46.343: INFO: Scaling statefulset ss to 0
Feb 20 15:09:46.351: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 15:09:46.354: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:09:46.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4230" for this suite.
Feb 20 15:09:54.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:09:54.541: INFO: namespace statefulset-4230 deletion completed in 8.168982487s

• [SLOW TEST:369.392 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
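The retry loop condensed above is a plain poll-until-timeout: run the command in the pod, and on any non-zero exit wait 10s and try again until the overall budget runs out. A minimal sketch of that pattern with the k8s.io/apimachinery wait helpers (runHostCmd here is a hypothetical stand-in for the framework's RunHostCmd, not its real code):

  package e2esketch

  import (
  	"fmt"
  	"os/exec"
  	"time"

  	"k8s.io/apimachinery/pkg/util/wait"
  )

  // runHostCmd is a hypothetical stand-in for the framework's RunHostCmd:
  // it execs a shell command inside a pod via kubectl.
  func runHostCmd(ns, pod, cmd string) (string, error) {
  	out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
  		"--", "/bin/sh", "-c", cmd).CombinedOutput()
  	return string(out), err
  }

  // retryHostCmd mirrors the log above: retry every 10s until the command
  // succeeds or the timeout expires, swallowing per-attempt errors.
  func retryHostCmd(ns, pod, cmd string, timeout time.Duration) (string, error) {
  	var stdout string
  	err := wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
  		out, err := runHostCmd(ns, pod, cmd)
  		if err != nil {
  			fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n", err)
  			return false, nil // not fatal; poll again
  		}
  		stdout = out
  		return true, nil
  	})
  	return stdout, err
  }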
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 15:09:54.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 20 15:09:54.631: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 20 15:09:55.240: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 20 15:09:57.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717808195, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717808195, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717808195, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717808195, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... the same deployment status (Available=False, MinimumReplicasUnavailable; Progressing=True, ReplicaSetUpdated) was logged again at 15:09:59, 15:10:01, 15:10:03, and 15:10:05 ...]
Feb 20 15:10:11.815: INFO: Waited 4.14119009s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:10:12.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9716" for this suite.
Feb 20 15:10:18.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:10:18.899: INFO: namespace aggregator-9716 deletion completed in 6.1497016s

• [SLOW TEST:24.357 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
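The repeated deployment status dumps in the Aggregator test come from polling the sample-apiserver Deployment every 2s until it reports ready replicas. A sketch of that readiness check using recent client-go signatures (the v1.15-era Get call took no context argument; cs is assumed to be an initialized clientset):

  package e2esketch

  import (
  	"context"
  	"time"

  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/util/wait"
  	"k8s.io/client-go/kubernetes"
  )

  // waitForDeploymentAvailable polls every 2s, as the log does for
  // sample-apiserver-deployment, until the Deployment reports enough
  // ready replicas for the generation it has observed.
  func waitForDeploymentAvailable(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
  	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
  		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
  		if err != nil {
  			return false, err
  		}
  		want := int32(1)
  		if d.Spec.Replicas != nil {
  			want = *d.Spec.Replicas
  		}
  		return d.Status.ObservedGeneration >= d.Generation &&
  			d.Status.ReadyReplicas >= want, nil
  	})
  }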
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 15:10:18.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 20 15:10:18.974: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7712" to be "success or failure"
Feb 20 15:10:18.981: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.56118ms
Feb 20 15:10:20.996: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022592527s
Feb 20 15:10:23.003: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029482007s
Feb 20 15:10:25.015: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040836946s
Feb 20 15:10:27.021: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04727084s
Feb 20 15:10:29.028: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05398948s
Feb 20 15:10:31.036: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.062374467s
STEP: Saw pod success
Feb 20 15:10:31.036: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 20 15:10:31.038: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 20 15:10:31.094: INFO: Waiting for pod pod-host-path-test to disappear
Feb 20 15:10:31.100: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:10:31.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7712" for this suite.
Feb 20 15:10:37.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:10:37.276: INFO: namespace hostpath-7712 deletion completed in 6.171901469s

• [SLOW TEST:18.377 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
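The hostPath spec above launches a throwaway pod that mounts a host directory and inspects the volume's mode from inside the container. A rough, illustrative equivalent of such a pod spec; the image, paths, and stat command are assumptions, not taken from the log:

  package e2esketch

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // hostPathModePod is an illustrative pod that mounts a hostPath volume
  // and prints the mounted directory's permission bits, then exits.
  func hostPathModePod() *corev1.Pod {
  	hostPathType := corev1.HostPathDirectoryOrCreate
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
  		Spec: corev1.PodSpec{
  			RestartPolicy: corev1.RestartPolicyNever,
  			Volumes: []corev1.Volume{{
  				Name: "test-volume",
  				VolumeSource: corev1.VolumeSource{
  					HostPath: &corev1.HostPathVolumeSource{
  						Path: "/tmp/host-path-test", // illustrative host directory
  						Type: &hostPathType,
  					},
  				},
  			}},
  			Containers: []corev1.Container{{
  				Name:    "test-container-1",
  				Image:   "busybox",
  				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
  				VolumeMounts: []corev1.VolumeMount{{
  					Name:      "test-volume",
  					MountPath: "/test-volume",
  				}},
  			}},
  		},
  	}
  }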
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 15:10:37.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 20 15:10:47.144: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:10:47.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5630" for this suite.
Feb 20 15:10:53.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:10:53.541: INFO: namespace container-runtime-5630 deletion completed in 6.170426953s

• [SLOW TEST:16.264 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
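The container-runtime spec above turns on TerminationMessagePolicy: FallbackToLogsOnError, under which the kubelet still prefers the message written to terminationMessagePath (the observed "OK") and falls back to container logs only when the container fails without writing one. An illustrative container along those lines (image and command are made up):

  package e2esketch

  import corev1 "k8s.io/api/core/v1"

  // terminationMessageContainer writes "OK" to the default termination
  // message path and exits 0, so the kubelet reports "OK" as the
  // termination message even with FallbackToLogsOnError set.
  func terminationMessageContainer() corev1.Container {
  	return corev1.Container{
  		Name:                     "termination-message-container",
  		Image:                    "busybox",
  		Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
  		TerminationMessagePath:   "/dev/termination-log",
  		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
  	}
  }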
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 20 15:10:53.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 20 15:10:53.711: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f" in namespace "projected-2741" to be "success or failure"
Feb 20 15:10:53.722: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.723837ms
Feb 20 15:10:55.735: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023918308s
Feb 20 15:10:57.740: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028663984s
Feb 20 15:10:59.749: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037112794s
Feb 20 15:11:01.764: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052168421s
Feb 20 15:11:03.773: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061229269s
STEP: Saw pod success
Feb 20 15:11:03.773: INFO: Pod "downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f" satisfied condition "success or failure"
Feb 20 15:11:03.776: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f container client-container: 
STEP: delete the pod
Feb 20 15:11:04.170: INFO: Waiting for pod downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f to disappear
Feb 20 15:11:04.507: INFO: Pod downwardapi-volume-2117cbab-9980-4bb3-9e59-d531449b562f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 20 15:11:04.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2741" for this suite.
Feb 20 15:11:10.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 15:11:10.740: INFO: namespace projected-2741 deletion completed in 6.209582844s

• [SLOW TEST:17.199 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
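The projected downwardAPI spec above relies on a resourceFieldRef for limits.memory; when the container declares no memory limit, the published value defaults to the node's allocatable memory, which is what the test asserts. An illustrative projected volume source (file path and container name are assumptions):

  package e2esketch

  import corev1 "k8s.io/api/core/v1"

  // memoryLimitProjection exposes the container's effective memory limit
  // as a file in a projected volume; with no limit set on the container,
  // the value falls back to the node's allocatable memory.
  func memoryLimitProjection() corev1.VolumeSource {
  	return corev1.VolumeSource{
  		Projected: &corev1.ProjectedVolumeSource{
  			Sources: []corev1.VolumeProjection{{
  				DownwardAPI: &corev1.DownwardAPIProjection{
  					Items: []corev1.DownwardAPIVolumeFile{{
  						Path: "memory_limit",
  						ResourceFieldRef: &corev1.ResourceFieldSelector{
  							ContainerName: "client-container",
  							Resource:      "limits.memory",
  						},
  					}},
  				},
  			}},
  		},
  	}
  }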
SSSSS
Feb 20 15:11:10.740: INFO: Running AfterSuite actions on all nodes
Feb 20 15:11:10.741: INFO: Running AfterSuite actions on node 1
Feb 20 15:11:10.741: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8111.831 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8112.06s)
FAIL