I0607 12:55:54.666594 6 e2e.go:243] Starting e2e run "c47f29a4-0a06-4452-bdd7-01d332ca5e07" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591534553 - Will randomize all specs
Will run 215 of 4412 specs

Jun 7 12:55:54.859: INFO: >>> kubeConfig: /root/.kube/config
Jun 7 12:55:54.861: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 7 12:55:54.881: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 7 12:55:54.919: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 7 12:55:54.919: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 7 12:55:54.919: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 7 12:55:54.925: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 7 12:55:54.925: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 7 12:55:54.925: INFO: e2e test version: v1.15.11
Jun 7 12:55:54.926: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:55:54.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Jun 7 12:55:54.998: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-wtd9f in namespace proxy-8083
I0607 12:55:55.035742 6 runners.go:180] Created replication controller with name: proxy-service-wtd9f, namespace: proxy-8083, replica count: 1
I0607 12:55:56.086103 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 12:55:57.086287 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 12:55:58.086481 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 12:55:59.086708 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:00.086928 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:01.087149 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:02.087373 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:03.087663 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:04.087873 6 runners.go:180] proxy-service-wtd9f Pods: 1
out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:05.088081 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:06.088294 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:07.088531 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:08.088726 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 7 12:56:08.091: INFO: setup took 13.091498356s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 14.294555ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 14.419559ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 14.336906ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 14.263606ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 14.304933ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 14.54656ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 14.432847ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ...
(200; 14.415312ms) Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 14.559081ms) Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 14.754165ms) Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 14.67665ms) Jun 7 12:56:08.107: INFO: (0) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 5.075664ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 5.088229ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 5.087361ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 5.161191ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.318957ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 5.204575ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.438021ms) Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 3.41039ms) Jun 7 12:56:08.127: INFO: (2) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.480137ms) Jun 7 12:56:08.127: INFO: (2) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 3.638336ms) Jun 7 12:56:08.127: INFO: (2) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... 
(200; 3.63877ms) Jun 7 12:56:08.128: INFO: (2) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.954991ms) Jun 7 12:56:08.128: INFO: (2) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.909699ms) Jun 7 12:56:08.128: INFO: (2) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 2.958714ms) Jun 7 12:56:08.132: INFO: (3) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.442195ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.62088ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.740661ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 4.678898ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.766225ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.683811ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.741634ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.272658ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 5.2176ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.253684ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 5.234115ms) Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.244089ms) Jun 7 
12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 5.322934ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.364981ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.338765ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.35214ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 4.368158ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.523301ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.797127ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.835729ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.905406ms) Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.843329ms) Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.827786ms) Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.844706ms) Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... 
(200; 4.986596ms) Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.779049ms) Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.76615ms) Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.813339ms) Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.716777ms) Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.747683ms) Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 3.943076ms) Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.908783ms) Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.428086ms) Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.599704ms) Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... 
(200; 4.616336ms) Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.711316ms) Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.846257ms) Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 7.248098ms) Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 7.338346ms) Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 7.553293ms) Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 7.673184ms) Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 7.671617ms) Jun 7 12:56:08.149: INFO: (5) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 8.047127ms) Jun 7 12:56:08.152: INFO: (6) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.263496ms) Jun 7 12:56:08.153: INFO: (6) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.274686ms) Jun 7 12:56:08.153: INFO: (6) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.291162ms) Jun 7 12:56:08.153: INFO: (6) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.4605ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.870924ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.921789ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 5.001634ms) 
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.89425ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.014394ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.07202ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 5.123601ms) Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 6.612245ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 6.648766ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 6.646478ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 6.601721ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 6.801028ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... 
(200; 6.685024ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 6.758521ms) Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 6.726696ms) Jun 7 12:56:08.162: INFO: (7) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 7.847549ms) Jun 7 12:56:08.163: INFO: (7) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 7.994667ms) Jun 7 12:56:08.163: INFO: (7) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 7.963576ms) Jun 7 12:56:08.163: INFO: (7) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 8.161204ms) Jun 7 12:56:08.166: INFO: (8) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.576321ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 3.705946ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... 
(200; 3.799086ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.872229ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.973839ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.070278ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.200508ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.118761ms) Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.676635ms) Jun 7 12:56:08.168: INFO: (8) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.604996ms) Jun 7 12:56:08.168: INFO: (8) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.767775ms) Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.923995ms) Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.937345ms) Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.940911ms) Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.14931ms) Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.706424ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.855805ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... 
(200; 6.245777ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 6.443636ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 6.510533ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 6.491924ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 6.586426ms) Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 6.751688ms) Jun 7 12:56:08.175: INFO: (9) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 7.345203ms) Jun 7 12:56:08.175: INFO: (9) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 4.022781ms) Jun 7 12:56:08.182: INFO: (10) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... 
(200; 4.019609ms) Jun 7 12:56:08.182: INFO: (10) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.132132ms) Jun 7 12:56:08.182: INFO: (10) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.677617ms) Jun 7 12:56:08.183: INFO: (10) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.708304ms) Jun 7 12:56:08.183: INFO: (10) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.793726ms) Jun 7 12:56:08.184: INFO: (10) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 5.598005ms) Jun 7 12:56:08.184: INFO: (10) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.669908ms) Jun 7 12:56:08.185: INFO: (10) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 6.563844ms) Jun 7 12:56:08.187: INFO: (11) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 2.070846ms) Jun 7 12:56:08.189: INFO: (11) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.257881ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.488187ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... 
(200; 4.816534ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.813072ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.061898ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.097203ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.136748ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.271406ms) Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 5.250424ms) Jun 7 12:56:08.191: INFO: (11) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.569098ms) Jun 7 12:56:08.191: INFO: (11) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 5.91823ms) Jun 7 12:56:08.191: INFO: (11) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 6.277133ms) Jun 7 12:56:08.196: INFO: (12) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.10851ms) Jun 7 12:56:08.196: INFO: (12) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.228365ms) Jun 7 12:56:08.196: INFO: (12) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... 
(200; 4.551512ms) Jun 7 12:56:08.197: INFO: (12) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.704762ms) Jun 7 12:56:08.197: INFO: (12) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 5.835684ms) Jun 7 12:56:08.197: INFO: (12) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 5.898817ms) Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 6.346306ms) Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 6.374366ms) Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 6.367306ms) Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 6.424711ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.707469ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.04856ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.048703ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.069012ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.122172ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.0442ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.073176ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... 
(200; 4.150494ms) Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.342351ms) Jun 7 12:56:08.203: INFO: (13) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.243725ms) Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.245282ms) Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.211484ms) Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.311728ms) Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.254735ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.502458ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... 
(200; 4.662329ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.69496ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.707799ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.783938ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.853318ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.808531ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.835401ms) Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.824616ms) Jun 7 12:56:08.210: INFO: (15) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 2.066415ms) Jun 7 12:56:08.211: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.269219ms) Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.130355ms) Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.240534ms) Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.366716ms) Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.336004ms) Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... 
(200; 4.417249ms) Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.719832ms) Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 5.763953ms) Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 6.043116ms) Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.990662ms) Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 6.096384ms) Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 1.753793ms) Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.183039ms) Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.129581ms) Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.33943ms) Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 3.608428ms) Jun 7 12:56:08.219: INFO: (16) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.765358ms) Jun 7 12:56:08.219: INFO: (16) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... 
(200; 4.442759ms) Jun 7 12:56:08.219: INFO: (16) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.5452ms) Jun 7 12:56:08.222: INFO: (17) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 2.522368ms) Jun 7 12:56:08.222: INFO: (17) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 2.680419ms) Jun 7 12:56:08.222: INFO: (17) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 2.677004ms) Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 3.195523ms) Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.237775ms) Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 3.581772ms) Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.552145ms) Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.66691ms) Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.787423ms) Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.895672ms) Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.867006ms) Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.203551ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.193249ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... 
(200; 3.199757ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 3.482414ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.650634ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 3.61776ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.633919ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 3.671769ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.74882ms) Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 3.946626ms) Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.452646ms) Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.553592ms) Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.820356ms) Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.814681ms) Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.876142ms) Jun 7 12:56:08.231: INFO: (19) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 1.906574ms) Jun 7 12:56:08.232: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... 
(200; 2.626739ms) Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 2.928885ms) Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.387938ms) Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.447009ms) Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.480744ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.037225ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.026984ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.036166ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.156089ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.180305ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.133525ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.187643ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.153328ms) Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.130927ms) STEP: deleting ReplicationController proxy-service-wtd9f in namespace proxy-8083, will wait for the garbage collector to delete the pods Jun 7 12:56:08.292: INFO: Deleting ReplicationController proxy-service-wtd9f took: 6.397701ms Jun 7 12:56:08.592: INFO: 
Terminating ReplicationController proxy-service-wtd9f pods took: 300.287644ms
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:56:22.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8083" for this suite.
Jun 7 12:56:28.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:56:28.330: INFO: namespace proxy-8083 deletion completed in 6.111498505s
• [SLOW TEST:33.404 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:56:28.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jun 7 12:56:28.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f -
--namespace=kubectl-1647' Jun 7 12:56:31.745: INFO: stderr: "" Jun 7 12:56:31.745: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 7 12:56:32.769: INFO: Selector matched 1 pods for map[app:redis] Jun 7 12:56:32.769: INFO: Found 0 / 1 Jun 7 12:56:33.750: INFO: Selector matched 1 pods for map[app:redis] Jun 7 12:56:33.750: INFO: Found 0 / 1 Jun 7 12:56:34.749: INFO: Selector matched 1 pods for map[app:redis] Jun 7 12:56:34.749: INFO: Found 0 / 1 Jun 7 12:56:35.751: INFO: Selector matched 1 pods for map[app:redis] Jun 7 12:56:35.751: INFO: Found 1 / 1 Jun 7 12:56:35.751: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 7 12:56:35.754: INFO: Selector matched 1 pods for map[app:redis] Jun 7 12:56:35.754: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 7 12:56:35.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4c4zn --namespace=kubectl-1647 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 7 12:56:35.867: INFO: stderr: "" Jun 7 12:56:35.867: INFO: stdout: "pod/redis-master-4c4zn patched\n" STEP: checking annotations Jun 7 12:56:35.870: INFO: Selector matched 1 pods for map[app:redis] Jun 7 12:56:35.870: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 12:56:35.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1647" for this suite. 
Jun 7 12:56:57.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:56:57.976: INFO: namespace kubectl-1647 deletion completed in 22.104072433s
• [SLOW TEST:29.646 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:56:57.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1e378fbf-df4d-45f2-bf15-2cf7d906f465
STEP: Creating a pod to test consume configMaps
Jun 7 12:56:58.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2" in namespace "configmap-5412" to be "success or failure"
Jun 7 12:56:58.126: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2": Phase="Pending", Reason="", readiness=false.
Elapsed: 10.752025ms Jun 7 12:57:00.131: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015249902s Jun 7 12:57:02.135: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019247762s STEP: Saw pod success Jun 7 12:57:02.135: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2" satisfied condition "success or failure" Jun 7 12:57:02.137: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2 container configmap-volume-test: STEP: delete the pod Jun 7 12:57:02.171: INFO: Waiting for pod pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2 to disappear Jun 7 12:57:02.180: INFO: Pod pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 12:57:02.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5412" for this suite. 
Jun 7 12:57:08.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:57:08.276: INFO: namespace configmap-5412 deletion completed in 6.093307311s
• [SLOW TEST:10.299 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:57:08.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 7 12:57:08.357: INFO: Waiting up to 5m0s for pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc" in namespace "emptydir-7687" to be "success or failure"
Jun 7 12:57:08.370: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.856038ms
Jun 7 12:57:10.375: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017439834s
Jun 7 12:57:12.379: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.021828655s STEP: Saw pod success Jun 7 12:57:12.379: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc" satisfied condition "success or failure" Jun 7 12:57:12.382: INFO: Trying to get logs from node iruya-worker pod pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc container test-container: STEP: delete the pod Jun 7 12:57:12.402: INFO: Waiting for pod pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc to disappear Jun 7 12:57:12.434: INFO: Pod pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 12:57:12.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7687" for this suite. Jun 7 12:57:18.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 12:57:18.533: INFO: namespace emptydir-7687 deletion completed in 6.095039204s • [SLOW TEST:10.256 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 12:57:18.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods 
in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d6a926ee-7a00-48a7-9d15-4c7260a24278 STEP: Creating a pod to test consume configMaps Jun 7 12:57:18.613: INFO: Waiting up to 5m0s for pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366" in namespace "configmap-9615" to be "success or failure" Jun 7 12:57:18.617: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366": Phase="Pending", Reason="", readiness=false. Elapsed: 3.325001ms Jun 7 12:57:20.621: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007019076s Jun 7 12:57:22.624: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010683778s STEP: Saw pod success Jun 7 12:57:22.624: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366" satisfied condition "success or failure" Jun 7 12:57:22.627: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366 container configmap-volume-test: STEP: delete the pod Jun 7 12:57:22.742: INFO: Waiting for pod pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366 to disappear Jun 7 12:57:22.755: INFO: Pod pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 12:57:22.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9615" for this suite. 
Jun 7 12:57:28.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:57:28.885: INFO: namespace configmap-9615 deletion completed in 6.123453711s
• [SLOW TEST:10.352 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:57:28.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-986226df-648c-4226-8bb4-caaf1d72a570
STEP: Creating secret with name s-test-opt-upd-6ddaf328-98ee-4508-a392-931fd9f93c3b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-986226df-648c-4226-8bb4-caaf1d72a570
STEP: Updating secret s-test-opt-upd-6ddaf328-98ee-4508-a392-931fd9f93c3b
STEP: Creating secret with name s-test-opt-create-d9e98acf-f3d4-420a-b673-efdd4b782174
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7
12:59:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-257" for this suite. Jun 7 12:59:23.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 12:59:23.647: INFO: namespace projected-257 deletion completed in 22.108805012s • [SLOW TEST:114.762 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 12:59:23.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f8367e36-35a3-4456-a55e-774f4fc2cf67 STEP: Creating a pod to test consume configMaps Jun 7 12:59:23.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3" in namespace "configmap-6132" to be "success or failure" Jun 7 12:59:23.715: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.449369ms Jun 7 12:59:25.728: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015938832s Jun 7 12:59:27.734: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02127956s STEP: Saw pod success Jun 7 12:59:27.734: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3" satisfied condition "success or failure" Jun 7 12:59:27.737: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3 container configmap-volume-test: STEP: delete the pod Jun 7 12:59:27.917: INFO: Waiting for pod pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3 to disappear Jun 7 12:59:27.955: INFO: Pod pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 12:59:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6132" for this suite. 
Jun 7 12:59:33.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:59:34.068: INFO: namespace configmap-6132 deletion completed in 6.109275322s
• [SLOW TEST:10.421 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:59:34.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 7 12:59:34.101: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:59:39.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-211" for this suite.
Jun 7 12:59:45.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 12:59:45.720: INFO: namespace init-container-211 deletion completed in 6.092655958s • [SLOW TEST:11.652 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 12:59:45.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-7k2q STEP: Creating a pod to test atomic-volume-subpath Jun 7 12:59:45.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7k2q" in namespace "subpath-8022" to be "success or failure" Jun 7 12:59:45.811: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.1378ms Jun 7 12:59:47.815: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00685614s Jun 7 12:59:49.819: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 4.010943969s Jun 7 12:59:51.824: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 6.015524345s Jun 7 12:59:53.828: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 8.01991417s Jun 7 12:59:55.832: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 10.024240622s Jun 7 12:59:57.837: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 12.028560762s Jun 7 12:59:59.841: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 14.03327958s Jun 7 13:00:01.846: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 16.037505715s Jun 7 13:00:03.849: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 18.041209942s Jun 7 13:00:05.854: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 20.045464835s Jun 7 13:00:07.858: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 22.049926142s Jun 7 13:00:09.862: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053591642s STEP: Saw pod success Jun 7 13:00:09.862: INFO: Pod "pod-subpath-test-secret-7k2q" satisfied condition "success or failure" Jun 7 13:00:09.864: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-7k2q container test-container-subpath-secret-7k2q: STEP: delete the pod Jun 7 13:00:10.048: INFO: Waiting for pod pod-subpath-test-secret-7k2q to disappear Jun 7 13:00:10.197: INFO: Pod pod-subpath-test-secret-7k2q no longer exists STEP: Deleting pod pod-subpath-test-secret-7k2q Jun 7 13:00:10.197: INFO: Deleting pod "pod-subpath-test-secret-7k2q" in namespace "subpath-8022" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:00:10.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8022" for this suite. Jun 7 13:00:16.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:00:16.361: INFO: namespace subpath-8022 deletion completed in 6.133384752s • [SLOW TEST:30.641 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:00:16.361: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-8948 I0607 13:00:16.409621 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8948, replica count: 1 I0607 13:00:17.459999 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 13:00:18.460232 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 13:00:19.460423 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 13:00:20.460656 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 13:00:21.460955 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 7 13:00:21.612: INFO: Created: latency-svc-lvr2f Jun 7 13:00:21.642: INFO: Got endpoints: latency-svc-lvr2f [81.599607ms] Jun 7 13:00:21.731: INFO: Created: latency-svc-kgkwf Jun 7 13:00:21.736: INFO: Got endpoints: latency-svc-kgkwf [93.427962ms] Jun 7 13:00:21.804: INFO: Created: latency-svc-fgzbn Jun 7 13:00:21.821: INFO: Got endpoints: latency-svc-fgzbn [178.119879ms] Jun 7 13:00:21.905: INFO: Created: latency-svc-7kq49 Jun 7 13:00:21.938: INFO: Got endpoints: latency-svc-7kq49 [295.994591ms] Jun 7 13:00:21.969: INFO: Created: latency-svc-2hv58 Jun 7 13:00:21.982: INFO: Got endpoints: 
latency-svc-2hv58 [339.741627ms] Jun 7 13:00:22.048: INFO: Created: latency-svc-6h8tp Jun 7 13:00:22.055: INFO: Got endpoints: latency-svc-6h8tp [412.467775ms] Jun 7 13:00:22.074: INFO: Created: latency-svc-t8nbm Jun 7 13:00:22.085: INFO: Got endpoints: latency-svc-t8nbm [442.283414ms] Jun 7 13:00:22.104: INFO: Created: latency-svc-7cm8j Jun 7 13:00:22.130: INFO: Got endpoints: latency-svc-7cm8j [487.771226ms] Jun 7 13:00:22.210: INFO: Created: latency-svc-vk2ww Jun 7 13:00:22.213: INFO: Got endpoints: latency-svc-vk2ww [570.479582ms] Jun 7 13:00:22.272: INFO: Created: latency-svc-9zf58 Jun 7 13:00:22.284: INFO: Got endpoints: latency-svc-9zf58 [640.946806ms] Jun 7 13:00:22.302: INFO: Created: latency-svc-2q5g8 Jun 7 13:00:22.349: INFO: Got endpoints: latency-svc-2q5g8 [706.461337ms] Jun 7 13:00:22.368: INFO: Created: latency-svc-bdp9n Jun 7 13:00:22.381: INFO: Got endpoints: latency-svc-bdp9n [738.119249ms] Jun 7 13:00:22.401: INFO: Created: latency-svc-svgwg Jun 7 13:00:22.417: INFO: Got endpoints: latency-svc-svgwg [774.210334ms] Jun 7 13:00:22.485: INFO: Created: latency-svc-psmtq Jun 7 13:00:22.495: INFO: Got endpoints: latency-svc-psmtq [852.011299ms] Jun 7 13:00:22.515: INFO: Created: latency-svc-8k4l9 Jun 7 13:00:22.526: INFO: Got endpoints: latency-svc-8k4l9 [882.817794ms] Jun 7 13:00:22.543: INFO: Created: latency-svc-pwdcf Jun 7 13:00:22.556: INFO: Got endpoints: latency-svc-pwdcf [913.087523ms] Jun 7 13:00:22.572: INFO: Created: latency-svc-crzlg Jun 7 13:00:22.628: INFO: Got endpoints: latency-svc-crzlg [892.234623ms] Jun 7 13:00:22.630: INFO: Created: latency-svc-zz45t Jun 7 13:00:22.647: INFO: Got endpoints: latency-svc-zz45t [825.747455ms] Jun 7 13:00:22.700: INFO: Created: latency-svc-57fn2 Jun 7 13:00:22.719: INFO: Got endpoints: latency-svc-57fn2 [780.040899ms] Jun 7 13:00:22.808: INFO: Created: latency-svc-bgqvs Jun 7 13:00:22.815: INFO: Got endpoints: latency-svc-bgqvs [832.536949ms] Jun 7 13:00:22.866: INFO: Created: latency-svc-rth8k Jun 7 
13:00:22.898: INFO: Got endpoints: latency-svc-rth8k [842.781711ms] Jun 7 13:00:22.947: INFO: Created: latency-svc-hn4qs Jun 7 13:00:22.964: INFO: Got endpoints: latency-svc-hn4qs [878.779208ms] Jun 7 13:00:22.992: INFO: Created: latency-svc-thb87 Jun 7 13:00:23.009: INFO: Got endpoints: latency-svc-thb87 [878.359359ms] Jun 7 13:00:23.028: INFO: Created: latency-svc-ds29d Jun 7 13:00:23.044: INFO: Got endpoints: latency-svc-ds29d [830.689763ms] Jun 7 13:00:23.091: INFO: Created: latency-svc-9skhd Jun 7 13:00:23.098: INFO: Got endpoints: latency-svc-9skhd [814.819994ms] Jun 7 13:00:23.126: INFO: Created: latency-svc-87n4c Jun 7 13:00:23.140: INFO: Got endpoints: latency-svc-87n4c [791.190619ms] Jun 7 13:00:23.180: INFO: Created: latency-svc-vb94z Jun 7 13:00:23.221: INFO: Got endpoints: latency-svc-vb94z [840.217768ms] Jun 7 13:00:23.280: INFO: Created: latency-svc-7xdwt Jun 7 13:00:23.291: INFO: Got endpoints: latency-svc-7xdwt [873.898234ms] Jun 7 13:00:23.359: INFO: Created: latency-svc-22nzd Jun 7 13:00:23.385: INFO: Created: latency-svc-8qfbh Jun 7 13:00:23.385: INFO: Got endpoints: latency-svc-22nzd [889.752504ms] Jun 7 13:00:23.408: INFO: Got endpoints: latency-svc-8qfbh [882.466242ms] Jun 7 13:00:23.438: INFO: Created: latency-svc-bzbcn Jun 7 13:00:23.497: INFO: Got endpoints: latency-svc-bzbcn [941.031289ms] Jun 7 13:00:23.507: INFO: Created: latency-svc-77p82 Jun 7 13:00:23.520: INFO: Got endpoints: latency-svc-77p82 [892.17056ms] Jun 7 13:00:23.538: INFO: Created: latency-svc-lld2d Jun 7 13:00:23.551: INFO: Got endpoints: latency-svc-lld2d [904.083966ms] Jun 7 13:00:23.570: INFO: Created: latency-svc-jxxwd Jun 7 13:00:23.581: INFO: Got endpoints: latency-svc-jxxwd [862.552973ms] Jun 7 13:00:23.642: INFO: Created: latency-svc-wggjg Jun 7 13:00:23.644: INFO: Got endpoints: latency-svc-wggjg [828.835733ms] Jun 7 13:00:23.672: INFO: Created: latency-svc-95jmz Jun 7 13:00:23.684: INFO: Got endpoints: latency-svc-95jmz [786.483925ms] Jun 7 13:00:23.707: INFO: 
Created: latency-svc-g2n2d Jun 7 13:00:23.720: INFO: Got endpoints: latency-svc-g2n2d [756.542693ms] Jun 7 13:00:23.779: INFO: Created: latency-svc-c8s67 Jun 7 13:00:23.782: INFO: Got endpoints: latency-svc-c8s67 [772.990158ms] Jun 7 13:00:23.808: INFO: Created: latency-svc-6lj7q Jun 7 13:00:23.829: INFO: Got endpoints: latency-svc-6lj7q [784.905623ms] Jun 7 13:00:23.870: INFO: Created: latency-svc-tfhrd Jun 7 13:00:23.916: INFO: Got endpoints: latency-svc-tfhrd [817.287212ms] Jun 7 13:00:23.917: INFO: Created: latency-svc-5bc47 Jun 7 13:00:23.931: INFO: Got endpoints: latency-svc-5bc47 [790.720087ms] Jun 7 13:00:23.950: INFO: Created: latency-svc-fqrwn Jun 7 13:00:23.980: INFO: Got endpoints: latency-svc-fqrwn [758.597645ms] Jun 7 13:00:24.066: INFO: Created: latency-svc-vh8xb Jun 7 13:00:24.071: INFO: Got endpoints: latency-svc-vh8xb [780.402281ms] Jun 7 13:00:24.108: INFO: Created: latency-svc-cw7nf Jun 7 13:00:24.121: INFO: Got endpoints: latency-svc-cw7nf [736.031954ms] Jun 7 13:00:24.135: INFO: Created: latency-svc-88lm2 Jun 7 13:00:24.148: INFO: Got endpoints: latency-svc-88lm2 [740.336679ms] Jun 7 13:00:24.166: INFO: Created: latency-svc-wbr7j Jun 7 13:00:24.228: INFO: Got endpoints: latency-svc-wbr7j [730.572061ms] Jun 7 13:00:24.234: INFO: Created: latency-svc-gtxqx Jun 7 13:00:24.251: INFO: Got endpoints: latency-svc-gtxqx [730.790866ms] Jun 7 13:00:24.288: INFO: Created: latency-svc-xnbcs Jun 7 13:00:24.300: INFO: Got endpoints: latency-svc-xnbcs [748.833835ms] Jun 7 13:00:24.316: INFO: Created: latency-svc-pfkdc Jun 7 13:00:24.359: INFO: Got endpoints: latency-svc-pfkdc [777.498947ms] Jun 7 13:00:24.369: INFO: Created: latency-svc-l7xzm Jun 7 13:00:24.385: INFO: Got endpoints: latency-svc-l7xzm [740.762379ms] Jun 7 13:00:24.412: INFO: Created: latency-svc-6llkg Jun 7 13:00:24.427: INFO: Got endpoints: latency-svc-6llkg [741.917776ms] Jun 7 13:00:24.455: INFO: Created: latency-svc-8nm4b Jun 7 13:00:24.497: INFO: Got endpoints: latency-svc-8nm4b 
[776.373329ms] Jun 7 13:00:24.504: INFO: Created: latency-svc-l7bx5 Jun 7 13:00:24.534: INFO: Got endpoints: latency-svc-l7bx5 [752.085508ms] Jun 7 13:00:24.555: INFO: Created: latency-svc-245fz Jun 7 13:00:24.585: INFO: Got endpoints: latency-svc-245fz [756.529129ms] Jun 7 13:00:24.647: INFO: Created: latency-svc-xdgzs Jun 7 13:00:24.662: INFO: Got endpoints: latency-svc-xdgzs [745.891561ms] Jun 7 13:00:24.734: INFO: Created: latency-svc-t5dsz Jun 7 13:00:24.844: INFO: Got endpoints: latency-svc-t5dsz [912.632379ms] Jun 7 13:00:24.849: INFO: Created: latency-svc-dbnsn Jun 7 13:00:24.872: INFO: Got endpoints: latency-svc-dbnsn [892.306691ms] Jun 7 13:00:24.912: INFO: Created: latency-svc-7zgzf Jun 7 13:00:24.926: INFO: Got endpoints: latency-svc-7zgzf [854.730209ms] Jun 7 13:00:24.994: INFO: Created: latency-svc-qgfq9 Jun 7 13:00:24.998: INFO: Got endpoints: latency-svc-qgfq9 [877.573751ms] Jun 7 13:00:25.023: INFO: Created: latency-svc-966g4 Jun 7 13:00:25.035: INFO: Got endpoints: latency-svc-966g4 [886.243593ms] Jun 7 13:00:25.066: INFO: Created: latency-svc-q9r9r Jun 7 13:00:25.077: INFO: Got endpoints: latency-svc-q9r9r [849.837162ms] Jun 7 13:00:25.132: INFO: Created: latency-svc-wc88h Jun 7 13:00:25.138: INFO: Got endpoints: latency-svc-wc88h [886.331012ms] Jun 7 13:00:25.158: INFO: Created: latency-svc-prc4k Jun 7 13:00:25.174: INFO: Got endpoints: latency-svc-prc4k [874.566876ms] Jun 7 13:00:25.206: INFO: Created: latency-svc-wkt9t Jun 7 13:00:25.222: INFO: Got endpoints: latency-svc-wkt9t [863.614458ms] Jun 7 13:00:25.283: INFO: Created: latency-svc-6rlmg Jun 7 13:00:25.299: INFO: Got endpoints: latency-svc-6rlmg [914.513381ms] Jun 7 13:00:25.331: INFO: Created: latency-svc-lkkzg Jun 7 13:00:25.343: INFO: Got endpoints: latency-svc-lkkzg [916.114404ms] Jun 7 13:00:25.416: INFO: Created: latency-svc-9ml2r Jun 7 13:00:25.433: INFO: Got endpoints: latency-svc-9ml2r [936.216656ms] Jun 7 13:00:25.456: INFO: Created: latency-svc-twks6 Jun 7 13:00:25.472: INFO: 
Got endpoints: latency-svc-twks6 [938.426695ms] Jun 7 13:00:25.498: INFO: Created: latency-svc-m6zn8 Jun 7 13:00:25.568: INFO: Got endpoints: latency-svc-m6zn8 [982.901041ms] Jun 7 13:00:25.571: INFO: Created: latency-svc-tbpsm Jun 7 13:00:25.579: INFO: Got endpoints: latency-svc-tbpsm [917.32374ms] Jun 7 13:00:25.595: INFO: Created: latency-svc-8b7mt Jun 7 13:00:25.609: INFO: Got endpoints: latency-svc-8b7mt [764.786115ms] Jun 7 13:00:25.627: INFO: Created: latency-svc-nbnnf Jun 7 13:00:25.639: INFO: Got endpoints: latency-svc-nbnnf [766.640602ms] Jun 7 13:00:25.656: INFO: Created: latency-svc-p49qs Jun 7 13:00:25.718: INFO: Got endpoints: latency-svc-p49qs [792.223817ms] Jun 7 13:00:25.737: INFO: Created: latency-svc-vxnmx Jun 7 13:00:25.754: INFO: Got endpoints: latency-svc-vxnmx [755.732654ms] Jun 7 13:00:25.774: INFO: Created: latency-svc-qvg6d Jun 7 13:00:25.790: INFO: Got endpoints: latency-svc-qvg6d [755.115552ms] Jun 7 13:00:25.812: INFO: Created: latency-svc-rxmnn Jun 7 13:00:25.856: INFO: Got endpoints: latency-svc-rxmnn [778.373761ms] Jun 7 13:00:25.884: INFO: Created: latency-svc-gv8mk Jun 7 13:00:25.899: INFO: Got endpoints: latency-svc-gv8mk [760.843136ms] Jun 7 13:00:25.917: INFO: Created: latency-svc-mxdvq Jun 7 13:00:25.929: INFO: Got endpoints: latency-svc-mxdvq [754.500666ms] Jun 7 13:00:26.007: INFO: Created: latency-svc-djz75 Jun 7 13:00:26.010: INFO: Got endpoints: latency-svc-djz75 [787.463545ms] Jun 7 13:00:26.038: INFO: Created: latency-svc-lhstv Jun 7 13:00:26.065: INFO: Got endpoints: latency-svc-lhstv [765.387038ms] Jun 7 13:00:26.107: INFO: Created: latency-svc-bll96 Jun 7 13:00:26.151: INFO: Got endpoints: latency-svc-bll96 [807.80164ms] Jun 7 13:00:26.163: INFO: Created: latency-svc-rmf9b Jun 7 13:00:26.176: INFO: Got endpoints: latency-svc-rmf9b [742.885899ms] Jun 7 13:00:26.199: INFO: Created: latency-svc-qx9sm Jun 7 13:00:26.212: INFO: Got endpoints: latency-svc-qx9sm [739.827777ms] Jun 7 13:00:26.229: INFO: Created: 
latency-svc-rzkjg Jun 7 13:00:26.243: INFO: Got endpoints: latency-svc-rzkjg [674.289434ms] Jun 7 13:00:26.288: INFO: Created: latency-svc-hfmhh Jun 7 13:00:26.309: INFO: Got endpoints: latency-svc-hfmhh [730.123782ms] Jun 7 13:00:26.328: INFO: Created: latency-svc-zvlbm Jun 7 13:00:26.339: INFO: Got endpoints: latency-svc-zvlbm [730.420718ms] Jun 7 13:00:26.355: INFO: Created: latency-svc-cczrt Jun 7 13:00:26.370: INFO: Got endpoints: latency-svc-cczrt [730.738777ms] Jun 7 13:00:26.431: INFO: Created: latency-svc-8tjh9 Jun 7 13:00:26.442: INFO: Got endpoints: latency-svc-8tjh9 [723.234985ms] Jun 7 13:00:26.463: INFO: Created: latency-svc-jvbtn Jun 7 13:00:26.478: INFO: Got endpoints: latency-svc-jvbtn [724.107453ms] Jun 7 13:00:26.502: INFO: Created: latency-svc-9x2xb Jun 7 13:00:26.515: INFO: Got endpoints: latency-svc-9x2xb [724.672219ms] Jun 7 13:00:26.588: INFO: Created: latency-svc-lgnhq Jun 7 13:00:26.607: INFO: Got endpoints: latency-svc-lgnhq [750.576287ms] Jun 7 13:00:26.607: INFO: Created: latency-svc-ckfgl Jun 7 13:00:26.623: INFO: Got endpoints: latency-svc-ckfgl [724.778906ms] Jun 7 13:00:26.643: INFO: Created: latency-svc-m9n7z Jun 7 13:00:26.660: INFO: Got endpoints: latency-svc-m9n7z [730.791456ms] Jun 7 13:00:26.736: INFO: Created: latency-svc-6cgrh Jun 7 13:00:26.744: INFO: Got endpoints: latency-svc-6cgrh [733.877628ms] Jun 7 13:00:26.781: INFO: Created: latency-svc-tlph5 Jun 7 13:00:26.798: INFO: Got endpoints: latency-svc-tlph5 [733.619923ms] Jun 7 13:00:26.881: INFO: Created: latency-svc-drr69 Jun 7 13:00:26.885: INFO: Got endpoints: latency-svc-drr69 [733.922561ms] Jun 7 13:00:26.909: INFO: Created: latency-svc-d8wzb Jun 7 13:00:26.925: INFO: Got endpoints: latency-svc-d8wzb [748.410304ms] Jun 7 13:00:26.975: INFO: Created: latency-svc-jpzsk Jun 7 13:00:27.018: INFO: Got endpoints: latency-svc-jpzsk [805.413277ms] Jun 7 13:00:27.039: INFO: Created: latency-svc-2lwsx Jun 7 13:00:27.051: INFO: Got endpoints: latency-svc-2lwsx [808.368579ms] 
Jun 7 13:00:27.069: INFO: Created: latency-svc-fkm9p Jun 7 13:00:27.082: INFO: Got endpoints: latency-svc-fkm9p [772.220512ms] Jun 7 13:00:27.100: INFO: Created: latency-svc-h59jh Jun 7 13:00:27.112: INFO: Got endpoints: latency-svc-h59jh [772.51669ms] Jun 7 13:00:27.180: INFO: Created: latency-svc-k2gvd Jun 7 13:00:27.197: INFO: Got endpoints: latency-svc-k2gvd [827.571347ms] Jun 7 13:00:27.227: INFO: Created: latency-svc-8hdmd Jun 7 13:00:27.239: INFO: Got endpoints: latency-svc-8hdmd [797.080268ms] Jun 7 13:00:27.347: INFO: Created: latency-svc-xp4vj Jun 7 13:00:27.350: INFO: Got endpoints: latency-svc-xp4vj [871.477678ms] Jun 7 13:00:27.426: INFO: Created: latency-svc-jzdkn Jun 7 13:00:27.437: INFO: Got endpoints: latency-svc-jzdkn [922.014605ms] Jun 7 13:00:27.485: INFO: Created: latency-svc-6k6m4 Jun 7 13:00:27.491: INFO: Got endpoints: latency-svc-6k6m4 [884.369809ms] Jun 7 13:00:27.513: INFO: Created: latency-svc-xhcnd Jun 7 13:00:27.528: INFO: Got endpoints: latency-svc-xhcnd [904.142453ms] Jun 7 13:00:27.550: INFO: Created: latency-svc-nkmjt Jun 7 13:00:27.564: INFO: Got endpoints: latency-svc-nkmjt [904.488711ms] Jun 7 13:00:27.635: INFO: Created: latency-svc-d4dvc Jun 7 13:00:27.648: INFO: Got endpoints: latency-svc-d4dvc [904.09601ms] Jun 7 13:00:27.671: INFO: Created: latency-svc-xrngm Jun 7 13:00:27.684: INFO: Got endpoints: latency-svc-xrngm [886.179963ms] Jun 7 13:00:27.701: INFO: Created: latency-svc-b9rtg Jun 7 13:00:27.715: INFO: Got endpoints: latency-svc-b9rtg [830.187819ms] Jun 7 13:00:27.732: INFO: Created: latency-svc-vt75j Jun 7 13:00:27.772: INFO: Got endpoints: latency-svc-vt75j [847.600251ms] Jun 7 13:00:27.783: INFO: Created: latency-svc-gt4xs Jun 7 13:00:27.813: INFO: Got endpoints: latency-svc-gt4xs [794.66138ms] Jun 7 13:00:27.837: INFO: Created: latency-svc-2r9tv Jun 7 13:00:27.946: INFO: Got endpoints: latency-svc-2r9tv [894.436607ms] Jun 7 13:00:27.990: INFO: Created: latency-svc-qdh6p Jun 7 13:00:28.048: INFO: Got endpoints: 
latency-svc-qdh6p [966.050422ms] Jun 7 13:00:28.055: INFO: Created: latency-svc-4t48k Jun 7 13:00:28.058: INFO: Got endpoints: latency-svc-4t48k [946.444416ms] Jun 7 13:00:28.113: INFO: Created: latency-svc-6sv2m Jun 7 13:00:28.119: INFO: Got endpoints: latency-svc-6sv2m [921.363202ms] Jun 7 13:00:28.143: INFO: Created: latency-svc-c7w8r Jun 7 13:00:28.191: INFO: Got endpoints: latency-svc-c7w8r [952.309291ms] Jun 7 13:00:28.217: INFO: Created: latency-svc-qd99z Jun 7 13:00:28.234: INFO: Got endpoints: latency-svc-qd99z [883.73271ms] Jun 7 13:00:28.266: INFO: Created: latency-svc-rmtdl Jun 7 13:00:28.276: INFO: Got endpoints: latency-svc-rmtdl [838.843319ms] Jun 7 13:00:28.330: INFO: Created: latency-svc-4db4l Jun 7 13:00:28.333: INFO: Got endpoints: latency-svc-4db4l [842.162285ms] Jun 7 13:00:28.359: INFO: Created: latency-svc-8pk88 Jun 7 13:00:28.372: INFO: Got endpoints: latency-svc-8pk88 [844.717141ms] Jun 7 13:00:28.389: INFO: Created: latency-svc-m2mwm Jun 7 13:00:28.415: INFO: Got endpoints: latency-svc-m2mwm [850.462909ms] Jun 7 13:00:28.473: INFO: Created: latency-svc-d2qsd Jun 7 13:00:28.475: INFO: Got endpoints: latency-svc-d2qsd [827.320647ms] Jun 7 13:00:28.523: INFO: Created: latency-svc-gmd94 Jun 7 13:00:28.544: INFO: Got endpoints: latency-svc-gmd94 [859.800129ms] Jun 7 13:00:28.569: INFO: Created: latency-svc-74pft Jun 7 13:00:28.610: INFO: Got endpoints: latency-svc-74pft [895.510437ms] Jun 7 13:00:28.619: INFO: Created: latency-svc-mndxt Jun 7 13:00:28.632: INFO: Got endpoints: latency-svc-mndxt [859.336429ms] Jun 7 13:00:28.649: INFO: Created: latency-svc-5mprl Jun 7 13:00:28.668: INFO: Got endpoints: latency-svc-5mprl [855.814671ms] Jun 7 13:00:28.697: INFO: Created: latency-svc-rzbdp Jun 7 13:00:28.736: INFO: Got endpoints: latency-svc-rzbdp [790.77278ms] Jun 7 13:00:28.755: INFO: Created: latency-svc-pls5s Jun 7 13:00:28.771: INFO: Got endpoints: latency-svc-pls5s [723.186807ms] Jun 7 13:00:28.810: INFO: Created: latency-svc-pbdfc Jun 7 
13:00:28.825: INFO: Got endpoints: latency-svc-pbdfc [767.141925ms] Jun 7 13:00:28.878: INFO: Created: latency-svc-6qnnw Jun 7 13:00:28.898: INFO: Got endpoints: latency-svc-6qnnw [779.198669ms] Jun 7 13:00:28.966: INFO: Created: latency-svc-78qlk Jun 7 13:00:29.006: INFO: Got endpoints: latency-svc-78qlk [814.567007ms] Jun 7 13:00:29.020: INFO: Created: latency-svc-t7vs2 Jun 7 13:00:29.030: INFO: Got endpoints: latency-svc-t7vs2 [796.092096ms] Jun 7 13:00:29.051: INFO: Created: latency-svc-cpl7b Jun 7 13:00:29.067: INFO: Got endpoints: latency-svc-cpl7b [791.018686ms] Jun 7 13:00:29.087: INFO: Created: latency-svc-p4tq8 Jun 7 13:00:29.143: INFO: Got endpoints: latency-svc-p4tq8 [810.234125ms] Jun 7 13:00:29.169: INFO: Created: latency-svc-29cxt Jun 7 13:00:29.181: INFO: Got endpoints: latency-svc-29cxt [808.093349ms] Jun 7 13:00:29.206: INFO: Created: latency-svc-lrfq4 Jun 7 13:00:29.217: INFO: Got endpoints: latency-svc-lrfq4 [802.691558ms] Jun 7 13:00:29.235: INFO: Created: latency-svc-gvhpp Jun 7 13:00:29.311: INFO: Got endpoints: latency-svc-gvhpp [836.066798ms] Jun 7 13:00:29.315: INFO: Created: latency-svc-gw25x Jun 7 13:00:29.319: INFO: Got endpoints: latency-svc-gw25x [775.069798ms] Jun 7 13:00:29.345: INFO: Created: latency-svc-qc6gt Jun 7 13:00:29.356: INFO: Got endpoints: latency-svc-qc6gt [745.740733ms] Jun 7 13:00:29.391: INFO: Created: latency-svc-z7km5 Jun 7 13:00:29.411: INFO: Got endpoints: latency-svc-z7km5 [778.947298ms] Jun 7 13:00:29.455: INFO: Created: latency-svc-fqbj5 Jun 7 13:00:29.458: INFO: Got endpoints: latency-svc-fqbj5 [789.489447ms] Jun 7 13:00:29.495: INFO: Created: latency-svc-z9jcl Jun 7 13:00:29.507: INFO: Got endpoints: latency-svc-z9jcl [770.657082ms] Jun 7 13:00:29.525: INFO: Created: latency-svc-jdn2g Jun 7 13:00:29.549: INFO: Got endpoints: latency-svc-jdn2g [778.117141ms] Jun 7 13:00:29.599: INFO: Created: latency-svc-sxtdz Jun 7 13:00:29.602: INFO: Got endpoints: latency-svc-sxtdz [776.229321ms] Jun 7 13:00:29.643: INFO: 
Created: latency-svc-hlvnl Jun 7 13:00:29.652: INFO: Got endpoints: latency-svc-hlvnl [754.05273ms] Jun 7 13:00:29.669: INFO: Created: latency-svc-mt4hh Jun 7 13:00:29.682: INFO: Got endpoints: latency-svc-mt4hh [676.359531ms] Jun 7 13:00:29.737: INFO: Created: latency-svc-r5xml Jun 7 13:00:29.739: INFO: Got endpoints: latency-svc-r5xml [708.810236ms] Jun 7 13:00:29.771: INFO: Created: latency-svc-kvnvh Jun 7 13:00:29.785: INFO: Got endpoints: latency-svc-kvnvh [718.584973ms] Jun 7 13:00:29.805: INFO: Created: latency-svc-5lt6t Jun 7 13:00:29.822: INFO: Got endpoints: latency-svc-5lt6t [678.090272ms] Jun 7 13:00:29.874: INFO: Created: latency-svc-rzzgv Jun 7 13:00:29.878: INFO: Got endpoints: latency-svc-rzzgv [696.772457ms] Jun 7 13:00:29.907: INFO: Created: latency-svc-x65t2 Jun 7 13:00:29.918: INFO: Got endpoints: latency-svc-x65t2 [700.041544ms] Jun 7 13:00:29.940: INFO: Created: latency-svc-77mq9 Jun 7 13:00:29.960: INFO: Got endpoints: latency-svc-77mq9 [648.796241ms] Jun 7 13:00:30.078: INFO: Created: latency-svc-xbt9s Jun 7 13:00:30.116: INFO: Got endpoints: latency-svc-xbt9s [796.961479ms] Jun 7 13:00:30.117: INFO: Created: latency-svc-j7n6v Jun 7 13:00:30.143: INFO: Got endpoints: latency-svc-j7n6v [786.367756ms] Jun 7 13:00:30.246: INFO: Created: latency-svc-9fnh7 Jun 7 13:00:30.249: INFO: Got endpoints: latency-svc-9fnh7 [837.964ms] Jun 7 13:00:30.279: INFO: Created: latency-svc-l6jh7 Jun 7 13:00:30.291: INFO: Got endpoints: latency-svc-l6jh7 [832.954496ms] Jun 7 13:00:30.310: INFO: Created: latency-svc-547wg Jun 7 13:00:30.322: INFO: Got endpoints: latency-svc-547wg [814.392818ms] Jun 7 13:00:30.341: INFO: Created: latency-svc-qt2sw Jun 7 13:00:30.389: INFO: Got endpoints: latency-svc-qt2sw [839.773589ms] Jun 7 13:00:30.395: INFO: Created: latency-svc-sgnvl Jun 7 13:00:30.406: INFO: Got endpoints: latency-svc-sgnvl [804.28711ms] Jun 7 13:00:30.431: INFO: Created: latency-svc-hxjkv Jun 7 13:00:30.443: INFO: Got endpoints: latency-svc-hxjkv [790.75589ms] 
Jun 7 13:00:30.458: INFO: Created: latency-svc-cplfz Jun 7 13:00:30.473: INFO: Got endpoints: latency-svc-cplfz [790.770169ms] Jun 7 13:00:30.539: INFO: Created: latency-svc-5wfv8 Jun 7 13:00:30.561: INFO: Got endpoints: latency-svc-5wfv8 [822.144383ms] Jun 7 13:00:30.561: INFO: Created: latency-svc-ppz5f Jun 7 13:00:30.569: INFO: Got endpoints: latency-svc-ppz5f [783.93543ms] Jun 7 13:00:30.587: INFO: Created: latency-svc-pt9np Jun 7 13:00:30.612: INFO: Got endpoints: latency-svc-pt9np [790.306645ms] Jun 7 13:00:30.677: INFO: Created: latency-svc-9wlqx Jun 7 13:00:30.698: INFO: Got endpoints: latency-svc-9wlqx [820.791258ms] Jun 7 13:00:30.699: INFO: Created: latency-svc-dk4hx Jun 7 13:00:30.714: INFO: Got endpoints: latency-svc-dk4hx [796.492515ms] Jun 7 13:00:30.741: INFO: Created: latency-svc-8h7bb Jun 7 13:00:30.751: INFO: Got endpoints: latency-svc-8h7bb [790.464739ms] Jun 7 13:00:30.838: INFO: Created: latency-svc-9wfzx Jun 7 13:00:30.841: INFO: Got endpoints: latency-svc-9wfzx [724.375068ms] Jun 7 13:00:30.903: INFO: Created: latency-svc-q5nht Jun 7 13:00:30.913: INFO: Got endpoints: latency-svc-q5nht [770.633751ms] Jun 7 13:00:31.007: INFO: Created: latency-svc-9thnw Jun 7 13:00:31.037: INFO: Got endpoints: latency-svc-9thnw [788.496155ms] Jun 7 13:00:31.067: INFO: Created: latency-svc-6tkwn Jun 7 13:00:31.082: INFO: Got endpoints: latency-svc-6tkwn [790.576822ms] Jun 7 13:00:31.150: INFO: Created: latency-svc-5zdtv Jun 7 13:00:31.152: INFO: Got endpoints: latency-svc-5zdtv [830.622586ms] Jun 7 13:00:31.185: INFO: Created: latency-svc-mcnhn Jun 7 13:00:31.214: INFO: Got endpoints: latency-svc-mcnhn [825.442205ms] Jun 7 13:00:31.247: INFO: Created: latency-svc-4mcgd Jun 7 13:00:31.305: INFO: Got endpoints: latency-svc-4mcgd [899.096781ms] Jun 7 13:00:31.307: INFO: Created: latency-svc-sb984 Jun 7 13:00:31.317: INFO: Got endpoints: latency-svc-sb984 [874.022535ms] Jun 7 13:00:31.347: INFO: Created: latency-svc-9f9kf Jun 7 13:00:31.359: INFO: Got endpoints: 
latency-svc-9f9kf [885.757319ms] Jun 7 13:00:31.394: INFO: Created: latency-svc-zjk57 Jun 7 13:00:31.472: INFO: Got endpoints: latency-svc-zjk57 [911.434706ms] Jun 7 13:00:31.499: INFO: Created: latency-svc-px6t7 Jun 7 13:00:31.510: INFO: Got endpoints: latency-svc-px6t7 [940.631134ms] Jun 7 13:00:31.529: INFO: Created: latency-svc-6bz28 Jun 7 13:00:31.540: INFO: Got endpoints: latency-svc-6bz28 [928.354689ms] Jun 7 13:00:31.560: INFO: Created: latency-svc-jqs7j Jun 7 13:00:31.570: INFO: Got endpoints: latency-svc-jqs7j [871.609945ms] Jun 7 13:00:31.630: INFO: Created: latency-svc-t4vtv Jun 7 13:00:31.632: INFO: Got endpoints: latency-svc-t4vtv [918.224246ms] Jun 7 13:00:31.665: INFO: Created: latency-svc-f2rzx Jun 7 13:00:31.680: INFO: Got endpoints: latency-svc-f2rzx [928.883845ms] Jun 7 13:00:31.697: INFO: Created: latency-svc-x9llw Jun 7 13:00:31.710: INFO: Got endpoints: latency-svc-x9llw [868.675152ms] Jun 7 13:00:31.727: INFO: Created: latency-svc-9qhbc Jun 7 13:00:31.784: INFO: Got endpoints: latency-svc-9qhbc [870.670182ms] Jun 7 13:00:31.808: INFO: Created: latency-svc-p29dp Jun 7 13:00:31.840: INFO: Got endpoints: latency-svc-p29dp [802.830784ms] Jun 7 13:00:31.881: INFO: Created: latency-svc-qbt9t Jun 7 13:00:31.922: INFO: Got endpoints: latency-svc-qbt9t [840.122158ms] Jun 7 13:00:31.930: INFO: Created: latency-svc-2wqhx Jun 7 13:00:31.944: INFO: Got endpoints: latency-svc-2wqhx [792.167245ms] Jun 7 13:00:31.979: INFO: Created: latency-svc-jfm5t Jun 7 13:00:31.999: INFO: Got endpoints: latency-svc-jfm5t [784.472296ms] Jun 7 13:00:32.066: INFO: Created: latency-svc-4q4g8 Jun 7 13:00:32.077: INFO: Got endpoints: latency-svc-4q4g8 [771.766425ms] Jun 7 13:00:32.097: INFO: Created: latency-svc-fcqlw Jun 7 13:00:32.107: INFO: Got endpoints: latency-svc-fcqlw [790.387377ms] Jun 7 13:00:32.129: INFO: Created: latency-svc-xst9v Jun 7 13:00:32.144: INFO: Got endpoints: latency-svc-xst9v [784.912692ms] Jun 7 13:00:32.191: INFO: Created: latency-svc-lkmbg Jun 7 
13:00:32.210: INFO: Got endpoints: latency-svc-lkmbg [737.990816ms] Jun 7 13:00:32.211: INFO: Created: latency-svc-sqjq9 Jun 7 13:00:32.222: INFO: Got endpoints: latency-svc-sqjq9 [712.320301ms] Jun 7 13:00:32.252: INFO: Created: latency-svc-dr8kl Jun 7 13:00:32.277: INFO: Got endpoints: latency-svc-dr8kl [736.602295ms] Jun 7 13:00:32.325: INFO: Created: latency-svc-rrh7t Jun 7 13:00:32.343: INFO: Got endpoints: latency-svc-rrh7t [772.498571ms] Jun 7 13:00:32.363: INFO: Created: latency-svc-7xq5b Jun 7 13:00:32.374: INFO: Got endpoints: latency-svc-7xq5b [741.851813ms] Jun 7 13:00:32.393: INFO: Created: latency-svc-99kfx Jun 7 13:00:32.404: INFO: Got endpoints: latency-svc-99kfx [723.912983ms] Jun 7 13:00:32.474: INFO: Created: latency-svc-gwfpn Jun 7 13:00:32.476: INFO: Got endpoints: latency-svc-gwfpn [766.170217ms] Jun 7 13:00:32.535: INFO: Created: latency-svc-ncpmq Jun 7 13:00:32.558: INFO: Got endpoints: latency-svc-ncpmq [774.314725ms] Jun 7 13:00:32.558: INFO: Latencies: [93.427962ms 178.119879ms 295.994591ms 339.741627ms 412.467775ms 442.283414ms 487.771226ms 570.479582ms 640.946806ms 648.796241ms 674.289434ms 676.359531ms 678.090272ms 696.772457ms 700.041544ms 706.461337ms 708.810236ms 712.320301ms 718.584973ms 723.186807ms 723.234985ms 723.912983ms 724.107453ms 724.375068ms 724.672219ms 724.778906ms 730.123782ms 730.420718ms 730.572061ms 730.738777ms 730.790866ms 730.791456ms 733.619923ms 733.877628ms 733.922561ms 736.031954ms 736.602295ms 737.990816ms 738.119249ms 739.827777ms 740.336679ms 740.762379ms 741.851813ms 741.917776ms 742.885899ms 745.740733ms 745.891561ms 748.410304ms 748.833835ms 750.576287ms 752.085508ms 754.05273ms 754.500666ms 755.115552ms 755.732654ms 756.529129ms 756.542693ms 758.597645ms 760.843136ms 764.786115ms 765.387038ms 766.170217ms 766.640602ms 767.141925ms 770.633751ms 770.657082ms 771.766425ms 772.220512ms 772.498571ms 772.51669ms 772.990158ms 774.210334ms 774.314725ms 775.069798ms 776.229321ms 776.373329ms 777.498947ms 
778.117141ms 778.373761ms 778.947298ms 779.198669ms 780.040899ms 780.402281ms 783.93543ms 784.472296ms 784.905623ms 784.912692ms 786.367756ms 786.483925ms 787.463545ms 788.496155ms 789.489447ms 790.306645ms 790.387377ms 790.464739ms 790.576822ms 790.720087ms 790.75589ms 790.770169ms 790.77278ms 791.018686ms 791.190619ms 792.167245ms 792.223817ms 794.66138ms 796.092096ms 796.492515ms 796.961479ms 797.080268ms 802.691558ms 802.830784ms 804.28711ms 805.413277ms 807.80164ms 808.093349ms 808.368579ms 810.234125ms 814.392818ms 814.567007ms 814.819994ms 817.287212ms 820.791258ms 822.144383ms 825.442205ms 825.747455ms 827.320647ms 827.571347ms 828.835733ms 830.187819ms 830.622586ms 830.689763ms 832.536949ms 832.954496ms 836.066798ms 837.964ms 838.843319ms 839.773589ms 840.122158ms 840.217768ms 842.162285ms 842.781711ms 844.717141ms 847.600251ms 849.837162ms 850.462909ms 852.011299ms 854.730209ms 855.814671ms 859.336429ms 859.800129ms 862.552973ms 863.614458ms 868.675152ms 870.670182ms 871.477678ms 871.609945ms 873.898234ms 874.022535ms 874.566876ms 877.573751ms 878.359359ms 878.779208ms 882.466242ms 882.817794ms 883.73271ms 884.369809ms 885.757319ms 886.179963ms 886.243593ms 886.331012ms 889.752504ms 892.17056ms 892.234623ms 892.306691ms 894.436607ms 895.510437ms 899.096781ms 904.083966ms 904.09601ms 904.142453ms 904.488711ms 911.434706ms 912.632379ms 913.087523ms 914.513381ms 916.114404ms 917.32374ms 918.224246ms 921.363202ms 922.014605ms 928.354689ms 928.883845ms 936.216656ms 938.426695ms 940.631134ms 941.031289ms 946.444416ms 952.309291ms 966.050422ms 982.901041ms] Jun 7 13:00:32.559: INFO: 50 %ile: 791.018686ms Jun 7 13:00:32.559: INFO: 90 %ile: 904.488711ms Jun 7 13:00:32.559: INFO: 99 %ile: 966.050422ms Jun 7 13:00:32.559: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:00:32.559: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "svc-latency-8948" for this suite. Jun 7 13:00:54.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:00:54.699: INFO: namespace svc-latency-8948 deletion completed in 22.126881547s • [SLOW TEST:38.338 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:00:54.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-f9632906-d65b-47b3-8cab-64c18f8ee62a STEP: Creating a pod to test consume secrets Jun 7 13:00:54.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e" in namespace "projected-9158" to be "success or failure" Jun 7 13:00:54.808: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.818517ms Jun 7 13:00:56.893: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099603538s Jun 7 13:00:58.917: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123727253s STEP: Saw pod success Jun 7 13:00:58.917: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e" satisfied condition "success or failure" Jun 7 13:00:58.930: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e container secret-volume-test: STEP: delete the pod Jun 7 13:00:58.955: INFO: Waiting for pod pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e to disappear Jun 7 13:00:58.959: INFO: Pod pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:00:58.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9158" for this suite. 
Jun 7 13:01:04.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:01:05.054: INFO: namespace projected-9158 deletion completed in 6.091197577s • [SLOW TEST:10.354 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:01:05.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.226.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.226.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.226.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.226.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 13:01:11.251: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.254: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.256: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.258: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.274: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.278: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod 
dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.280: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:11.301: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local] Jun 7 13:01:16.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod 
dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.343: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.348: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.351: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:16.368: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local] Jun 7 13:01:21.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod 
dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.312: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.319: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.339: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the 
requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:21.374: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local] Jun 7 13:01:26.325: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.328: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.334: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.354: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods 
dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.363: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:26.383: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local] Jun 7 13:01:31.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) 
Jun 7 13:01:31.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.356: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.362: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:31.385: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local 
jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local] Jun 7 13:01:36.313: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.317: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.320: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.323: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.344: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.347: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.350: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod 
dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.352: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1) Jun 7 13:01:36.371: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local] Jun 7 13:01:41.384: INFO: DNS probes using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:01:42.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3614" for this suite. 
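Each probe above follows the same pattern: derive a record name, query it with dig over UDP (+notcp) or TCP (+tcp), and write an OK marker file only when the answer section is non-empty. Two of the names are computed rather than fixed: the pod A record is the pod IP with dots replaced by dashes, and the PTR name reverses the service IP's octets under in-addr.arpa. Both transformations can be reproduced locally; the namespace (dns-3614) and service IP (10.104.226.175) are taken from this run, while the pod IP below is a stand-in for what the real probe gets from `hostname -i`:

```shell
# Derive the two DNS names the wheezy/jessie probes query.
# 10.244.1.7 is a stand-in pod IP (the probe uses `hostname -i`).
ip="10.244.1.7"
pod_a_rec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-3614.pod.cluster.local"}')
echo "$pod_a_rec"    # 10-244-1-7.dns-3614.pod.cluster.local

# Reverse the service IP's octets for the PTR lookup.
svc_ip="10.104.226.175"
ptr_name=$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr_name"     # 175.226.104.10.in-addr.arpa.
```

This explains why the log's PTR probes query `175.226.104.10.in-addr.arpa.` for the service at 10.104.226.175.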
Jun 7 13:01:48.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:01:48.426: INFO: namespace dns-3614 deletion completed in 6.229178345s • [SLOW TEST:43.372 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:01:48.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 7 13:01:48.484: INFO: Waiting up to 5m0s for pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231" in namespace "downward-api-2081" to be "success or failure" Jun 7 13:01:48.504: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231": Phase="Pending", Reason="", readiness=false. Elapsed: 19.829816ms Jun 7 13:01:50.508: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023901295s Jun 7 13:01:52.512: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028602366s STEP: Saw pod success Jun 7 13:01:52.513: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231" satisfied condition "success or failure" Jun 7 13:01:52.516: INFO: Trying to get logs from node iruya-worker pod downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231 container dapi-container: STEP: delete the pod Jun 7 13:01:52.572: INFO: Waiting for pod downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231 to disappear Jun 7 13:01:52.584: INFO: Pod downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:01:52.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2081" for this suite. Jun 7 13:01:58.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:01:58.681: INFO: namespace downward-api-2081 deletion completed in 6.094096525s • [SLOW TEST:10.255 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:01:58.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 13:01:58.783: INFO: Create a RollingUpdate DaemonSet Jun 7 13:01:58.786: INFO: Check that daemon pods launch on every node of the cluster Jun 7 13:01:58.789: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:01:58.792: INFO: Number of nodes with available pods: 0 Jun 7 13:01:58.792: INFO: Node iruya-worker is running more than one daemon pod Jun 7 13:01:59.797: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:01:59.800: INFO: Number of nodes with available pods: 0 Jun 7 13:01:59.800: INFO: Node iruya-worker is running more than one daemon pod Jun 7 13:02:00.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:00.874: INFO: Number of nodes with available pods: 0 Jun 7 13:02:00.874: INFO: Node iruya-worker is running more than one daemon pod Jun 7 13:02:02.002: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:02.006: INFO: Number of nodes with available pods: 0 Jun 7 13:02:02.006: INFO: Node iruya-worker is running more than one daemon pod Jun 7 13:02:02.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:02.803: INFO: Number of nodes 
with available pods: 1 Jun 7 13:02:02.803: INFO: Node iruya-worker2 is running more than one daemon pod Jun 7 13:02:03.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:03.802: INFO: Number of nodes with available pods: 2 Jun 7 13:02:03.802: INFO: Number of running nodes: 2, number of available pods: 2 Jun 7 13:02:03.802: INFO: Update the DaemonSet to trigger a rollout Jun 7 13:02:03.809: INFO: Updating DaemonSet daemon-set Jun 7 13:02:12.831: INFO: Roll back the DaemonSet before rollout is complete Jun 7 13:02:12.838: INFO: Updating DaemonSet daemon-set Jun 7 13:02:12.838: INFO: Make sure DaemonSet rollback is complete Jun 7 13:02:12.842: INFO: Wrong image for pod: daemon-set-5nzfm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 7 13:02:12.842: INFO: Pod daemon-set-5nzfm is not available Jun 7 13:02:12.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:13.852: INFO: Wrong image for pod: daemon-set-5nzfm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 7 13:02:13.852: INFO: Pod daemon-set-5nzfm is not available Jun 7 13:02:13.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:14.888: INFO: Wrong image for pod: daemon-set-5nzfm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Jun 7 13:02:14.888: INFO: Pod daemon-set-5nzfm is not available Jun 7 13:02:14.892: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 13:02:15.852: INFO: Pod daemon-set-h5b4f is not available Jun 7 13:02:15.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1827, will wait for the garbage collector to delete the pods Jun 7 13:02:15.919: INFO: Deleting DaemonSet.extensions daemon-set took: 6.589255ms Jun 7 13:02:16.219: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.326656ms Jun 7 13:02:22.223: INFO: Number of nodes with available pods: 0 Jun 7 13:02:22.223: INFO: Number of running nodes: 0, number of available pods: 0 Jun 7 13:02:22.230: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1827/daemonsets","resourceVersion":"15149110"},"items":null} Jun 7 13:02:22.233: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1827/pods","resourceVersion":"15149110"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:02:22.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1827" for this suite. 
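The repeated "DaemonSet pods can't tolerate node iruya-control-plane" messages come from the `node-role.kubernetes.io/master` NoSchedule taint shown in the log, which the test's DaemonSet does not tolerate. As a sketch only (field names per the core v1 Toleration API; not part of this test's spec), a DaemonSet that should also run on such a node would carry a pod-spec fragment like:

```yaml
# Pod spec fragment: tolerate the master NoSchedule taint seen above.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```

Without it, the scheduler excludes the tainted control-plane node, which is why the test skips it when counting available daemon pods.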
Jun 7 13:02:28.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:02:28.393: INFO: namespace daemonsets-1827 deletion completed in 6.147047273s • [SLOW TEST:29.711 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:02:28.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:02:32.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7371" for this suite. 
Jun 7 13:03:10.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:03:10.595: INFO: namespace kubelet-test-7371 deletion completed in 38.107606377s
• [SLOW TEST:42.200 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:03:10.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jun 7 13:03:10.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8756'
Jun 7 13:03:10.938: INFO: stderr: ""
Jun 7 13:03:10.938: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jun 7 13:03:11.967: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:11.967: INFO: Found 0 / 1
Jun 7 13:03:12.942: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:12.942: INFO: Found 0 / 1
Jun 7 13:03:13.943: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:13.943: INFO: Found 0 / 1
Jun 7 13:03:14.961: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:14.961: INFO: Found 1 / 1
Jun 7 13:03:14.961: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 7 13:03:14.964: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:14.964: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Jun 7 13:03:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756'
Jun 7 13:03:15.075: INFO: stderr: ""
Jun 7 13:03:15.075: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Jun 13:03:13.691 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jun 13:03:13.691 # Server started, Redis version 3.2.12\n1:M 07 Jun 13:03:13.691 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jun 7 13:03:15.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1'
Jun 7 13:03:15.200: INFO: stderr: ""
Jun 7 13:03:15.200: INFO: stdout: "1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jun 7 13:03:15.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --limit-bytes=1'
Jun 7 13:03:15.299: INFO: stderr: ""
Jun 7 13:03:15.300: INFO: stdout: " "
STEP: exposing timestamps
Jun 7 13:03:15.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1 --timestamps'
Jun 7 13:03:15.397: INFO: stderr: ""
Jun 7 13:03:15.397: INFO: stdout: "2020-06-07T13:03:13.691965821Z 1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jun 7 13:03:17.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=1s'
Jun 7 13:03:18.011: INFO: stderr: ""
Jun 7 13:03:18.011: INFO: stdout: ""
Jun 7 13:03:18.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=24h'
Jun 7 13:03:18.122: INFO: stderr: ""
Jun 7 13:03:18.122: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Jun 13:03:13.691 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jun 13:03:13.691 # Server started, Redis version 3.2.12\n1:M 07 Jun 13:03:13.691 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jun 7 13:03:18.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8756'
Jun 7 13:03:18.226: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 13:03:18.226: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jun 7 13:03:18.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8756'
Jun 7 13:03:18.321: INFO: stderr: "No resources found.\n"
Jun 7 13:03:18.321: INFO: stdout: ""
Jun 7 13:03:18.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8756 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 13:03:18.410: INFO: stderr: ""
Jun 7 13:03:18.410: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:03:18.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8756" for this suite.
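The test above drives `kubectl logs` through its main filtering flags. The same sequence can be reproduced by hand against any running pod; a sketch using the pod name and namespace from this run (both are randomized per run and will differ):

```shell
# Full container log (pod name, container name, namespace are from this run)
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756

# Limit output to the last line
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1

# Limit output to the first byte
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --limit-bytes=1

# Prefix each line with an RFC 3339 timestamp
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1 --timestamps

# Restrict to a relative time window
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=1s
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=24h
```

These are cluster-dependent commands, not a runnable script; note that `--since=1s` legitimately returns nothing here because the last log line is older than one second.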
Jun 7 13:03:40.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:03:40.520: INFO: namespace kubectl-8756 deletion completed in 22.105686094s
• [SLOW TEST:29.924 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:03:40.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-rnk4
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 13:03:40.796: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rnk4" in namespace "subpath-1042" to be "success or failure"
Jun 7 13:03:40.800: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.740146ms
Jun 7 13:03:42.805: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008436244s
Jun 7 13:03:44.809: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 4.012612104s
Jun 7 13:03:46.814: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 6.017375656s
Jun 7 13:03:48.819: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 8.022431471s
Jun 7 13:03:50.823: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 10.027033504s
Jun 7 13:03:52.828: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 12.032097751s
Jun 7 13:03:54.832: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 14.036098107s
Jun 7 13:03:56.837: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 16.040947617s
Jun 7 13:03:58.842: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 18.045261202s
Jun 7 13:04:00.847: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 20.050199148s
Jun 7 13:04:02.851: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 22.054380144s
Jun 7 13:04:04.931: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.135009936s
STEP: Saw pod success
Jun 7 13:04:04.931: INFO: Pod "pod-subpath-test-downwardapi-rnk4" satisfied condition "success or failure"
Jun 7 13:04:04.940: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-rnk4 container test-container-subpath-downwardapi-rnk4:
STEP: delete the pod
Jun 7 13:04:04.967: INFO: Waiting for pod pod-subpath-test-downwardapi-rnk4 to disappear
Jun 7 13:04:04.970: INFO: Pod pod-subpath-test-downwardapi-rnk4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rnk4
Jun 7 13:04:04.970: INFO: Deleting pod "pod-subpath-test-downwardapi-rnk4" in namespace "subpath-1042"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:04:04.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1042" for this suite.
Jun 7 13:04:10.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:04:11.084: INFO: namespace subpath-1042 deletion completed in 6.105845134s
• [SLOW TEST:30.564 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:04:11.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 in namespace container-probe-5990
Jun 7 13:04:15.178: INFO: Started pod liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 in namespace container-probe-5990
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:04:15.182: INFO: Initial restart count of pod liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 is 0
Jun 7 13:04:33.284: INFO: Restart count of pod container-probe-5990/liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 is now 1 (18.102294158s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:04:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5990" for this suite.
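The behaviour verified above (restartCount going from 0 to 1 once /healthz starts failing) can be reproduced with a minimal pod manifest. This is a sketch, not the manifest the e2e framework generated: it assumes the standard Kubernetes liveness test image, whose /healthz endpoint deliberately starts returning errors after about ten seconds; the name, port, and delays are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http            # illustrative name; the e2e run uses a generated UUID
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # test image: /healthz fails after ~10s, triggering a restart
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```

After the probe starts failing, `kubectl get pod liveness-http` should show RESTARTS incrementing, matching the restart-count transition the test asserts on.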
Jun 7 13:04:39.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:04:39.394: INFO: namespace container-probe-5990 deletion completed in 6.090210541s
• [SLOW TEST:28.310 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:04:39.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jun 7 13:04:43.482: INFO: Pod pod-hostip-70edd121-a0b2-4d26-9b7f-de4f44f98fec has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:04:43.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-420" for this suite.
Jun 7 13:05:05.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:05.599: INFO: namespace pods-420 deletion completed in 22.11351153s
• [SLOW TEST:26.204 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:05.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[]
Jun 7 13:05:05.738: INFO: Get endpoints failed (11.745462ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun 7 13:05:06.742: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[] (1.0157566s elapsed)
STEP: Creating pod pod1 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[pod1:[80]]
Jun 7 13:05:10.807: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[pod1:[80]] (4.057282288s elapsed)
STEP: Creating pod pod2 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[pod1:[80] pod2:[80]]
Jun 7 13:05:14.910: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[pod1:[80] pod2:[80]] (4.099118471s elapsed)
STEP: Deleting pod pod1 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[pod2:[80]]
Jun 7 13:05:15.936: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[pod2:[80]] (1.022147075s elapsed)
STEP: Deleting pod pod2 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[]
Jun 7 13:05:16.954: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[] (1.012874227s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:16.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1636" for this suite.
Jun 7 13:05:23.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:23.145: INFO: namespace services-1636 deletion completed in 6.13270216s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:17.546 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:23.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jun 7 13:05:27.302: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 7 13:05:32.417: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:32.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4319" for this suite.
Jun 7 13:05:38.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:38.590: INFO: namespace pods-4319 deletion completed in 6.167259363s
• [SLOW TEST:15.445 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:38.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 7 13:05:38.673: INFO: Waiting up to 5m0s for pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c" in namespace "downward-api-5606" to be "success or failure"
Jun 7 13:05:38.699: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.693138ms
Jun 7 13:05:40.703: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030164892s
Jun 7 13:05:42.707: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034202811s
STEP: Saw pod success
Jun 7 13:05:42.707: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c" satisfied condition "success or failure"
Jun 7 13:05:42.710: INFO: Trying to get logs from node iruya-worker2 pod downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c container dapi-container:
STEP: delete the pod
Jun 7 13:05:42.728: INFO: Waiting for pod downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c to disappear
Jun 7 13:05:42.733: INFO: Pod downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5606" for this suite.
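The downward API test above checks that `limits.cpu` and `limits.memory` exposed as environment variables fall back to the node's allocatable values when the container declares no resource limits. A minimal sketch of a pod exercising the same mechanism (`resourceFieldRef`); the pod name, container name, and image are illustrative, not taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults            # illustrative; the e2e pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep LIMIT"]
    # No resources.limits declared, so these resolve to node allocatable values.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

The pod's log should then show CPU_LIMIT and MEMORY_LIMIT populated from the node's allocatable CPU and memory, which is exactly the "default limits from node allocatable" behaviour the conformance test asserts.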
Jun 7 13:05:48.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:48.831: INFO: namespace downward-api-5606 deletion completed in 6.093829926s
• [SLOW TEST:10.241 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:48.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 7 13:05:48.921: INFO: Waiting up to 5m0s for pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573" in namespace "emptydir-5614" to be "success or failure"
Jun 7 13:05:48.924: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23148ms
Jun 7 13:05:50.993: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072212427s
Jun 7 13:05:52.998: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076889159s
STEP: Saw pod success
Jun 7 13:05:52.998: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573" satisfied condition "success or failure"
Jun 7 13:05:53.000: INFO: Trying to get logs from node iruya-worker pod pod-29d95cd0-eead-40f9-b6e1-e5cb61684573 container test-container:
STEP: delete the pod
Jun 7 13:05:53.091: INFO: Waiting for pod pod-29d95cd0-eead-40f9-b6e1-e5cb61684573 to disappear
Jun 7 13:05:53.102: INFO: Pod pod-29d95cd0-eead-40f9-b6e1-e5cb61684573 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:53.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5614" for this suite.
Jun 7 13:05:59.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:59.196: INFO: namespace emptydir-5614 deletion completed in 6.09094215s
• [SLOW TEST:10.365 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:59.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:06:05.378: INFO: DNS probes using dns-test-a3182278-a86c-493f-8ec3-ba1bfaa8e730 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:06:11.534: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:11.537: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:11.537: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:16.543: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:16.547: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:16.547: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:21.543: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:21.568: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:21.568: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:26.549: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:26.553: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:26.553: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:31.543: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:31.546: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 7 13:06:31.546: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:36.587: INFO: DNS probes using dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:06:43.016: INFO: DNS probes using dns-test-10b454b5-cf36-4434-afc3-5f55ba509c3b succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:06:43.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4343" for this suite.
Jun 7 13:06:49.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:06:49.228: INFO: namespace dns-4343 deletion completed in 6.095254599s
• [SLOW TEST:50.032 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:06:49.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 7 13:06:49.307: INFO: Waiting up to 5m0s for pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c" in namespace "emptydir-6540" to be "success or failure"
Jun 7 13:06:49.315: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.922356ms
Jun 7 13:06:51.406: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099057483s
Jun 7 13:06:53.413: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105697445s
STEP: Saw pod success
Jun 7 13:06:53.413: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c" satisfied condition "success or failure"
Jun 7 13:06:53.416: INFO: Trying to get logs from node iruya-worker2 pod pod-af2c2215-762e-4003-8b77-6834ae08d92c container test-container:
STEP: delete the pod
Jun 7 13:06:53.455: INFO: Waiting for pod pod-af2c2215-762e-4003-8b77-6834ae08d92c to disappear
Jun 7 13:06:53.459: INFO: Pod pod-af2c2215-762e-4003-8b77-6834ae08d92c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:06:53.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6540" for this suite.
Jun 7 13:06:59.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:06:59.547: INFO: namespace emptydir-6540 deletion completed in 6.084778232s
• [SLOW TEST:10.319 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:06:59.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 13:06:59.615: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jun 7 13:07:04.620: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 7 13:07:04.620: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jun 7 13:07:04.774: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4549,SelfLink:/apis/apps/v1/namespaces/deployment-4549/deployments/test-cleanup-deployment,UID:8b642ef4-0f20-4ae3-b811-96db351d6b8b,ResourceVersion:15150113,Generation:1,CreationTimestamp:2020-06-07 13:07:04 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 7 13:07:04.811: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4549,SelfLink:/apis/apps/v1/namespaces/deployment-4549/replicasets/test-cleanup-deployment-55bbcbc84c,UID:b83ff74d-4f9c-491c-a591-991ee08e9f5a,ResourceVersion:15150115,Generation:1,CreationTimestamp:2020-06-07 13:07:04 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 8b642ef4-0f20-4ae3-b811-96db351d6b8b 0xc0028c1627 0xc0028c1628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 13:07:04.811: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 7 13:07:04.812: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4549,SelfLink:/apis/apps/v1/namespaces/deployment-4549/replicasets/test-cleanup-controller,UID:3a9c4e9e-a9bc-4f94-87fb-a7d01d8df0ab,ResourceVersion:15150114,Generation:1,CreationTimestamp:2020-06-07 13:06:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 8b642ef4-0f20-4ae3-b811-96db351d6b8b 0xc0028c1557 0xc0028c1558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 7 13:07:04.961: INFO: Pod "test-cleanup-controller-nkmqp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-nkmqp,GenerateName:test-cleanup-controller-,Namespace:deployment-4549,SelfLink:/api/v1/namespaces/deployment-4549/pods/test-cleanup-controller-nkmqp,UID:0c4ee989-d90a-499f-b50f-a9c13a69ecb7,ResourceVersion:15150107,Generation:0,CreationTimestamp:2020-06-07 13:06:59 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3a9c4e9e-a9bc-4f94-87fb-a7d01d8df0ab 0xc0028c1ef7 0xc0028c1ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-l6bb8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l6bb8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-l6bb8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028c1f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028c1f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:06:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:07:02 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:06:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.124,StartTime:2020-06-07 13:06:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 13:07:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://24b840eaca758c6d2d0addcdc58550400716d01d083e453048abc126740e8942}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 13:07:04.961: INFO: Pod "test-cleanup-deployment-55bbcbc84c-nb22c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-nb22c,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4549,SelfLink:/api/v1/namespaces/deployment-4549/pods/test-cleanup-deployment-55bbcbc84c-nb22c,UID:ac49a01f-4930-4e23-87ab-087ba9d7f9c3,ResourceVersion:15150121,Generation:0,CreationTimestamp:2020-06-07 13:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c b83ff74d-4f9c-491c-a591-991ee08e9f5a 0xc00180a077 0xc00180a078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-l6bb8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l6bb8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-l6bb8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00180a0f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00180a110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:07:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:07:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4549" for this suite. 
Jun 7 13:07:11.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:07:11.122: INFO: namespace deployment-4549 deletion completed in 6.150461884s • [SLOW TEST:11.575 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:07:11.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8539 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 7 13:07:11.171: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 7 13:07:37.292: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8539 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 13:07:37.292: INFO: >>> kubeConfig: /root/.kube/config 
I0607 13:07:37.335428 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Create stream I0607 13:07:37.335482 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Stream added, broadcasting: 1 I0607 13:07:37.409788 6 log.go:172] (0xc000b6b290) Reply frame received for 1 I0607 13:07:37.409841 6 log.go:172] (0xc000b6b290) (0xc0025c0460) Create stream I0607 13:07:37.409854 6 log.go:172] (0xc000b6b290) (0xc0025c0460) Stream added, broadcasting: 3 I0607 13:07:37.410989 6 log.go:172] (0xc000b6b290) Reply frame received for 3 I0607 13:07:37.411008 6 log.go:172] (0xc000b6b290) (0xc0011f0000) Create stream I0607 13:07:37.411014 6 log.go:172] (0xc000b6b290) (0xc0011f0000) Stream added, broadcasting: 5 I0607 13:07:37.411943 6 log.go:172] (0xc000b6b290) Reply frame received for 5 I0607 13:07:38.551933 6 log.go:172] (0xc000b6b290) Data frame received for 3 I0607 13:07:38.551971 6 log.go:172] (0xc0025c0460) (3) Data frame handling I0607 13:07:38.552000 6 log.go:172] (0xc0025c0460) (3) Data frame sent I0607 13:07:38.552385 6 log.go:172] (0xc000b6b290) Data frame received for 5 I0607 13:07:38.552416 6 log.go:172] (0xc0011f0000) (5) Data frame handling I0607 13:07:38.552453 6 log.go:172] (0xc000b6b290) Data frame received for 3 I0607 13:07:38.552480 6 log.go:172] (0xc0025c0460) (3) Data frame handling I0607 13:07:38.554732 6 log.go:172] (0xc000b6b290) Data frame received for 1 I0607 13:07:38.554771 6 log.go:172] (0xc0025c0320) (1) Data frame handling I0607 13:07:38.554795 6 log.go:172] (0xc0025c0320) (1) Data frame sent I0607 13:07:38.554812 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Stream removed, broadcasting: 1 I0607 13:07:38.554830 6 log.go:172] (0xc000b6b290) Go away received I0607 13:07:38.554987 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Stream removed, broadcasting: 1 I0607 13:07:38.555012 6 log.go:172] (0xc000b6b290) (0xc0025c0460) Stream removed, broadcasting: 3 I0607 13:07:38.555025 6 log.go:172] (0xc000b6b290) (0xc0011f0000) Stream removed, broadcasting: 5 Jun 7 13:07:38.555: INFO: 
Found all expected endpoints: [netserver-0] Jun 7 13:07:38.559: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.141 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8539 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 13:07:38.559: INFO: >>> kubeConfig: /root/.kube/config I0607 13:07:38.595532 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Create stream I0607 13:07:38.595564 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Stream added, broadcasting: 1 I0607 13:07:38.598175 6 log.go:172] (0xc0009e0b00) Reply frame received for 1 I0607 13:07:38.598226 6 log.go:172] (0xc0009e0b00) (0xc00107a000) Create stream I0607 13:07:38.598248 6 log.go:172] (0xc0009e0b00) (0xc00107a000) Stream added, broadcasting: 3 I0607 13:07:38.599527 6 log.go:172] (0xc0009e0b00) Reply frame received for 3 I0607 13:07:38.599580 6 log.go:172] (0xc0009e0b00) (0xc00107a1e0) Create stream I0607 13:07:38.599602 6 log.go:172] (0xc0009e0b00) (0xc00107a1e0) Stream added, broadcasting: 5 I0607 13:07:38.600568 6 log.go:172] (0xc0009e0b00) Reply frame received for 5 I0607 13:07:39.671127 6 log.go:172] (0xc0009e0b00) Data frame received for 3 I0607 13:07:39.671174 6 log.go:172] (0xc00107a000) (3) Data frame handling I0607 13:07:39.671201 6 log.go:172] (0xc00107a000) (3) Data frame sent I0607 13:07:39.671223 6 log.go:172] (0xc0009e0b00) Data frame received for 3 I0607 13:07:39.671316 6 log.go:172] (0xc00107a000) (3) Data frame handling I0607 13:07:39.671927 6 log.go:172] (0xc0009e0b00) Data frame received for 5 I0607 13:07:39.671963 6 log.go:172] (0xc00107a1e0) (5) Data frame handling I0607 13:07:39.674415 6 log.go:172] (0xc0009e0b00) Data frame received for 1 I0607 13:07:39.674499 6 log.go:172] (0xc0011f0460) (1) Data frame handling I0607 13:07:39.674545 6 log.go:172] (0xc0011f0460) (1) Data frame sent I0607 13:07:39.674578 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Stream removed, broadcasting: 1 
I0607 13:07:39.674608 6 log.go:172] (0xc0009e0b00) Go away received I0607 13:07:39.674799 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Stream removed, broadcasting: 1 I0607 13:07:39.674835 6 log.go:172] (0xc0009e0b00) (0xc00107a000) Stream removed, broadcasting: 3 I0607 13:07:39.674851 6 log.go:172] (0xc0009e0b00) (0xc00107a1e0) Stream removed, broadcasting: 5 Jun 7 13:07:39.674: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:07:39.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8539" for this suite. Jun 7 13:08:03.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:08:03.772: INFO: namespace pod-network-test-8539 deletion completed in 24.09186073s • [SLOW TEST:52.649 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:08:03.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 13:08:03.882: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 7 13:08:08.887: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 7 13:08:08.887: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 7 13:08:10.892: INFO: Creating deployment "test-rollover-deployment" Jun 7 13:08:10.904: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 7 13:08:12.911: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 7 13:08:12.918: INFO: Ensure that both replica sets have 1 created replica Jun 7 13:08:12.924: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 7 13:08:12.930: INFO: Updating deployment test-rollover-deployment Jun 7 13:08:12.930: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 7 13:08:15.006: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 7 13:08:15.011: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 7 13:08:15.016: INFO: all replica sets need to contain the pod-template-hash label Jun 7 13:08:15.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132093, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 13:08:17.023: INFO: all replica sets need to contain the pod-template-hash label Jun 7 13:08:17.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 13:08:19.023: INFO: all replica sets need to contain the pod-template-hash label Jun 7 13:08:19.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 13:08:21.024: INFO: all replica sets need to contain the pod-template-hash label Jun 7 13:08:21.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 13:08:23.025: INFO: all replica sets need to contain the pod-template-hash label Jun 7 13:08:23.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 13:08:25.024: INFO: all replica sets need to contain the pod-template-hash label Jun 7 13:08:25.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 13:08:27.022: INFO: Jun 7 13:08:27.023: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 7 13:08:27.028: INFO: 
Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/deployments/test-rollover-deployment,UID:2e006d3a-61d0-4541-bfc8-567e45960965,ResourceVersion:15150459,Generation:2,CreationTimestamp:2020-06-07 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-07 13:08:10 +0000 UTC 2020-06-07 13:08:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-07 13:08:26 +0000 UTC 2020-06-07 13:08:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 7 13:08:27.032: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/replicasets/test-rollover-deployment-854595fc44,UID:0e47c3cd-6f76-4af7-860e-83a1c4e90b1f,ResourceVersion:15150448,Generation:2,CreationTimestamp:2020-06-07 13:08:12 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2e006d3a-61d0-4541-bfc8-567e45960965 0xc000d5e147 0xc000d5e148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 7 13:08:27.032: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 7 13:08:27.032: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/replicasets/test-rollover-controller,UID:e665f422-1f3d-4bd1-8d2a-93f1f02ef646,ResourceVersion:15150457,Generation:2,CreationTimestamp:2020-06-07 13:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2e006d3a-61d0-4541-bfc8-567e45960965 0xc002d11e77 0xc002d11e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 13:08:27.032: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/replicasets/test-rollover-deployment-9b8b997cf,UID:077cdcdc-1ba8-41f8-8e01-684563c06f4c,ResourceVersion:15150409,Generation:2,CreationTimestamp:2020-06-07 13:08:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2e006d3a-61d0-4541-bfc8-567e45960965 0xc000d5e230 0xc000d5e231}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 13:08:27.035: INFO: Pod "test-rollover-deployment-854595fc44-7dmvm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-7dmvm,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3883,SelfLink:/api/v1/namespaces/deployment-3883/pods/test-rollover-deployment-854595fc44-7dmvm,UID:fe09be12-29c0-43bf-9cc3-2565df513b8c,ResourceVersion:15150424,Generation:0,CreationTimestamp:2020-06-07 13:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0e47c3cd-6f76-4af7-860e-83a1c4e90b1f 0xc001b05bc7 0xc001b05bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wgkpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wgkpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wgkpm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b05c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b05c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.129,StartTime:2020-06-07 13:08:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-07 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://fe3a00547fa43ed4998edfa49a24590a83b64a26c0cd1a09bce026082b41480e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:08:27.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3883" for this suite. Jun 7 13:08:33.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:08:33.266: INFO: namespace deployment-3883 deletion completed in 6.228047426s • [SLOW TEST:29.495 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:08:33.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 13:08:33.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2561' Jun 7 13:08:36.184: INFO: stderr: "" Jun 7 13:08:36.184: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 7 13:08:36.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2561' Jun 7 13:08:36.494: INFO: stderr: "" Jun 7 13:08:36.494: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 7 13:08:37.499: INFO: Selector matched 1 pods for map[app:redis] Jun 7 13:08:37.499: INFO: Found 0 / 1 Jun 7 13:08:38.500: INFO: Selector matched 1 pods for map[app:redis] Jun 7 13:08:38.500: INFO: Found 0 / 1 Jun 7 13:08:39.499: INFO: Selector matched 1 pods for map[app:redis] Jun 7 13:08:39.499: INFO: Found 0 / 1 Jun 7 13:08:40.499: INFO: Selector matched 1 pods for map[app:redis] Jun 7 13:08:40.499: INFO: Found 1 / 1 Jun 7 13:08:40.499: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 7 13:08:40.503: INFO: Selector matched 1 pods for map[app:redis] Jun 7 13:08:40.503: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 7 13:08:40.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-cqw9b --namespace=kubectl-2561' Jun 7 13:08:40.618: INFO: stderr: "" Jun 7 13:08:40.618: INFO: stdout: "Name: redis-master-cqw9b\nNamespace: kubectl-2561\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Sun, 07 Jun 2020 13:08:36 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.130\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://4e1bd824d6d8d5d310ed724744bd039ae0bd5fc7c32d7f0193d2919c4d2c8c5f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 07 Jun 2020 13:08:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-224lc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-224lc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-224lc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2561/redis-master-cqw9b to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Jun 7 13:08:40.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master 
--namespace=kubectl-2561' Jun 7 13:08:40.732: INFO: stderr: "" Jun 7 13:08:40.732: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2561\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-cqw9b\n" Jun 7 13:08:40.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2561' Jun 7 13:08:40.843: INFO: stderr: "" Jun 7 13:08:40.844: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2561\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.205.146\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.130:6379\nSession Affinity: None\nEvents: \n" Jun 7 13:08:40.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jun 7 13:08:40.971: INFO: stderr: "" Jun 7 13:08:40.971: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason 
Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 83d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 
0 (0%) 83d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 7 13:08:40.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2561' Jun 7 13:08:41.076: INFO: stderr: "" Jun 7 13:08:41.076: INFO: stdout: "Name: kubectl-2561\nLabels: e2e-framework=kubectl\n e2e-run=c47f29a4-0a06-4452-bdd7-01d332ca5e07\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:08:41.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2561" for this suite. Jun 7 13:09:03.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:09:03.202: INFO: namespace kubectl-2561 deletion completed in 22.122785937s • [SLOW TEST:29.936 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:09:03.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 13:09:09.407: INFO: DNS probes using dns-7852/dns-test-af2103af-ebd5-4347-8dc7-18656c9be5c4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:09:09.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7852" for this suite. Jun 7 13:09:15.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:09:15.709: INFO: namespace dns-7852 deletion completed in 6.093317528s • [SLOW TEST:12.507 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:09:15.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 7 13:09:15.947: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:09:24.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1969" for this suite. Jun 7 13:09:46.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:09:46.429: INFO: namespace init-container-1969 deletion completed in 22.115184927s • [SLOW TEST:30.719 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:09:46.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace 
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-6204cb11-b4a7-4c02-b797-ae76ced9d12f STEP: Creating a pod to test consume secrets Jun 7 13:09:46.510: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276" in namespace "projected-7451" to be "success or failure" Jun 7 13:09:46.514: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276": Phase="Pending", Reason="", readiness=false. Elapsed: 3.861488ms Jun 7 13:09:48.518: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008308523s Jun 7 13:09:50.522: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012441739s STEP: Saw pod success Jun 7 13:09:50.522: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276" satisfied condition "success or failure" Jun 7 13:09:50.525: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276 container projected-secret-volume-test: STEP: delete the pod Jun 7 13:09:51.044: INFO: Waiting for pod pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276 to disappear Jun 7 13:09:51.082: INFO: Pod pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:09:51.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7451" for this suite. 
Jun 7 13:09:57.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:09:57.544: INFO: namespace projected-7451 deletion completed in 6.457735295s • [SLOW TEST:11.115 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:09:57.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 7 13:09:57.611: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 7 13:09:57.618: INFO: Waiting for terminating namespaces to be deleted... 
Jun 7 13:09:57.620: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 7 13:09:57.625: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 7 13:09:57.625: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 13:09:57.625: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 7 13:09:57.625: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 13:09:57.625: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 7 13:09:57.655: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 7 13:09:57.655: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 13:09:57.655: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 7 13:09:57.655: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 13:09:57.655: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 7 13:09:57.655: INFO: Container coredns ready: true, restart count 0 Jun 7 13:09:57.655: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 7 13:09:57.655: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-89b0b349-1917-4301-8287-9a5fb1a51a7f 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-89b0b349-1917-4301-8287-9a5fb1a51a7f off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-89b0b349-1917-4301-8287-9a5fb1a51a7f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:10:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6062" for this suite. Jun 7 13:10:23.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:10:23.951: INFO: namespace sched-pred-6062 deletion completed in 18.115351129s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.408 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:10:23.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e85892e8-e253-4012-be67-82f6daa9847e STEP: Creating a pod to test consume secrets Jun 7 13:10:24.157: INFO: Waiting up to 5m0s for pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa" in namespace "secrets-7953" to be "success or failure" Jun 7 13:10:24.160: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54021ms Jun 7 13:10:26.219: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061410609s Jun 7 13:10:28.223: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065847357s STEP: Saw pod success Jun 7 13:10:28.223: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa" satisfied condition "success or failure" Jun 7 13:10:28.226: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa container secret-volume-test: STEP: delete the pod Jun 7 13:10:28.247: INFO: Waiting for pod pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa to disappear Jun 7 13:10:28.320: INFO: Pod pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:10:28.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7953" for this suite. 
Jun 7 13:10:34.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:10:34.430: INFO: namespace secrets-7953 deletion completed in 6.104901434s • [SLOW TEST:10.479 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:10:34.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 7 13:10:39.035: INFO: Successfully updated pod "labelsupdate1476ea7c-7ef8-4f57-bf0d-2b1efb159ca6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:10:41.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7957" for this suite. 
Jun 7 13:11:03.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:11:03.168: INFO: namespace projected-7957 deletion completed in 22.093175465s • [SLOW TEST:28.738 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:11:03.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-86871ee0-444f-4be1-a576-1efd8b0ef3f3 STEP: Creating a pod to test consume configMaps Jun 7 13:11:03.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a" in namespace "projected-4058" to be "success or failure" Jun 7 13:11:03.275: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.350985ms Jun 7 13:11:05.280: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034121786s Jun 7 13:11:07.284: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037883185s STEP: Saw pod success Jun 7 13:11:07.284: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a" satisfied condition "success or failure" Jun 7 13:11:07.286: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a container projected-configmap-volume-test: STEP: delete the pod Jun 7 13:11:07.407: INFO: Waiting for pod pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a to disappear Jun 7 13:11:07.412: INFO: Pod pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:11:07.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4058" for this suite. 
Jun 7 13:11:13.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:11:13.537: INFO: namespace projected-4058 deletion completed in 6.11897773s • [SLOW TEST:10.369 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:11:13.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 7 13:11:13.625: INFO: Waiting up to 5m0s for pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608" in namespace "emptydir-8029" to be "success or failure" Jun 7 13:11:13.635: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608": Phase="Pending", Reason="", readiness=false. Elapsed: 9.717463ms Jun 7 13:11:15.639: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013958666s Jun 7 13:11:17.643: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017812972s STEP: Saw pod success Jun 7 13:11:17.643: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608" satisfied condition "success or failure" Jun 7 13:11:17.645: INFO: Trying to get logs from node iruya-worker2 pod pod-d7bea0c2-293d-4931-bd18-e6d06d17a608 container test-container: STEP: delete the pod Jun 7 13:11:17.661: INFO: Waiting for pod pod-d7bea0c2-293d-4931-bd18-e6d06d17a608 to disappear Jun 7 13:11:17.665: INFO: Pod pod-d7bea0c2-293d-4931-bd18-e6d06d17a608 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:11:17.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8029" for this suite. Jun 7 13:11:23.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:11:23.773: INFO: namespace emptydir-8029 deletion completed in 6.105348375s • [SLOW TEST:10.236 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:11:23.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0e25059e-2fa6-4b95-af62-87cbeddf5c23 STEP: Creating a pod to test consume secrets Jun 7 13:11:23.883: INFO: Waiting up to 5m0s for pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b" in namespace "secrets-2217" to be "success or failure" Jun 7 13:11:23.903: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.253551ms Jun 7 13:11:25.908: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024805389s Jun 7 13:11:27.912: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029066786s STEP: Saw pod success Jun 7 13:11:27.912: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b" satisfied condition "success or failure" Jun 7 13:11:27.915: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b container secret-volume-test: STEP: delete the pod Jun 7 13:11:27.948: INFO: Waiting for pod pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b to disappear Jun 7 13:11:27.953: INFO: Pod pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:11:27.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2217" for this suite. 
Jun 7 13:11:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:11:34.053: INFO: namespace secrets-2217 deletion completed in 6.097576367s • [SLOW TEST:10.279 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:11:34.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-45a330c6-563e-43eb-94de-cd5b9fab5eda STEP: Creating a pod to test consume configMaps Jun 7 13:11:34.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7" in namespace "projected-7135" to be "success or failure" Jun 7 13:11:34.167: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.913494ms Jun 7 13:11:36.192: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030429217s Jun 7 13:11:38.195: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034042144s STEP: Saw pod success Jun 7 13:11:38.196: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7" satisfied condition "success or failure" Jun 7 13:11:38.198: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7 container projected-configmap-volume-test: STEP: delete the pod Jun 7 13:11:38.521: INFO: Waiting for pod pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7 to disappear Jun 7 13:11:38.557: INFO: Pod pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:11:38.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7135" for this suite. 
Jun 7 13:11:44.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:11:44.722: INFO: namespace projected-7135 deletion completed in 6.161702335s • [SLOW TEST:10.668 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:11:44.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-vx8n STEP: Creating a pod to test atomic-volume-subpath Jun 7 13:11:44.848: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vx8n" in namespace "subpath-9374" to be "success or failure" Jun 7 13:11:44.856: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.36031ms Jun 7 13:11:46.861: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013230945s Jun 7 13:11:48.866: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 4.018140736s Jun 7 13:11:50.871: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 6.022819358s Jun 7 13:11:52.875: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 8.027220647s Jun 7 13:11:54.880: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 10.031986441s Jun 7 13:11:56.884: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 12.036294021s Jun 7 13:11:58.888: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 14.040011304s Jun 7 13:12:00.893: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 16.045345599s Jun 7 13:12:02.898: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 18.049871972s Jun 7 13:12:04.903: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 20.054585518s Jun 7 13:12:06.907: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 22.059122723s Jun 7 13:12:08.912: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.063974117s STEP: Saw pod success Jun 7 13:12:08.912: INFO: Pod "pod-subpath-test-configmap-vx8n" satisfied condition "success or failure" Jun 7 13:12:08.915: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-vx8n container test-container-subpath-configmap-vx8n: STEP: delete the pod Jun 7 13:12:08.983: INFO: Waiting for pod pod-subpath-test-configmap-vx8n to disappear Jun 7 13:12:08.989: INFO: Pod pod-subpath-test-configmap-vx8n no longer exists STEP: Deleting pod pod-subpath-test-configmap-vx8n Jun 7 13:12:08.989: INFO: Deleting pod "pod-subpath-test-configmap-vx8n" in namespace "subpath-9374" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:12:08.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9374" for this suite. Jun 7 13:12:15.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:12:15.088: INFO: namespace subpath-9374 deletion completed in 6.092473451s • [SLOW TEST:30.364 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:12:15.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:12:41.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5138" for this suite. Jun 7 13:12:47.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:12:47.468: INFO: namespace namespaces-5138 deletion completed in 6.086039225s STEP: Destroying namespace "nsdeletetest-8702" for this suite. Jun 7 13:12:47.470: INFO: Namespace nsdeletetest-8702 was already deleted STEP: Destroying namespace "nsdeletetest-9657" for this suite. 
Jun 7 13:12:53.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:12:53.575: INFO: namespace nsdeletetest-9657 deletion completed in 6.105888105s • [SLOW TEST:38.486 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:12:53.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-fc8cf3a1-911e-473c-bec4-a8f821e9e34d STEP: Creating secret with name s-test-opt-upd-98a3f903-13bd-4cdc-802a-3c299c62b3e9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fc8cf3a1-911e-473c-bec4-a8f821e9e34d STEP: Updating secret s-test-opt-upd-98a3f903-13bd-4cdc-802a-3c299c62b3e9 STEP: Creating secret with name s-test-opt-create-a1d834a7-8971-45c5-9db7-75f77e64d9b8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:13:01.773: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-869" for this suite. Jun 7 13:13:23.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:13:23.902: INFO: namespace secrets-869 deletion completed in 22.12511957s • [SLOW TEST:30.326 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:13:23.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-14 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-14 STEP: Deleting pre-stop pod Jun 7 13:13:37.010: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:13:37.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-14" for this suite.
Jun 7 13:14:15.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:14:15.160: INFO: namespace prestop-14 deletion completed in 38.131781175s
• [SLOW TEST:51.257 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:14:15.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-df3c1850-629c-404c-9118-fa5267ef94cc
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:14:21.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9944" for this suite.
Jun 7 13:14:43.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:14:43.422: INFO: namespace configmap-9944 deletion completed in 22.106409992s
• [SLOW TEST:28.262 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:14:43.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7394
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 7 13:14:43.454: INFO: Waiting up to 10m0s for all (but 0) nodes to be
schedulable
STEP: Creating test pods
Jun 7 13:15:09.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.141:8080/dial?request=hostName&protocol=udp&host=10.244.1.151&port=8081&tries=1'] Namespace:pod-network-test-7394 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:15:09.622: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:15:09.647922 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Create stream
I0607 13:15:09.647988 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Stream added, broadcasting: 1
I0607 13:15:09.651076 6 log.go:172] (0xc001bfc8f0) Reply frame received for 1
I0607 13:15:09.651110 6 log.go:172] (0xc001bfc8f0) (0xc0021d4d20) Create stream
I0607 13:15:09.651116 6 log.go:172] (0xc001bfc8f0) (0xc0021d4d20) Stream added, broadcasting: 3
I0607 13:15:09.651979 6 log.go:172] (0xc001bfc8f0) Reply frame received for 3
I0607 13:15:09.652012 6 log.go:172] (0xc001bfc8f0) (0xc0025c00a0) Create stream
I0607 13:15:09.652029 6 log.go:172] (0xc001bfc8f0) (0xc0025c00a0) Stream added, broadcasting: 5
I0607 13:15:09.652942 6 log.go:172] (0xc001bfc8f0) Reply frame received for 5
I0607 13:15:09.780942 6 log.go:172] (0xc001bfc8f0) Data frame received for 3
I0607 13:15:09.780986 6 log.go:172] (0xc0021d4d20) (3) Data frame handling
I0607 13:15:09.781009 6 log.go:172] (0xc0021d4d20) (3) Data frame sent
I0607 13:15:09.781907 6 log.go:172] (0xc001bfc8f0) Data frame received for 3
I0607 13:15:09.781942 6 log.go:172] (0xc0021d4d20) (3) Data frame handling
I0607 13:15:09.782065 6 log.go:172] (0xc001bfc8f0) Data frame received for 5
I0607 13:15:09.782096 6 log.go:172] (0xc0025c00a0) (5) Data frame handling
I0607 13:15:09.783935 6 log.go:172] (0xc001bfc8f0) Data frame received for 1
I0607 13:15:09.783957 6 log.go:172] (0xc0025c0000) (1) Data frame handling
I0607 13:15:09.783969 6 log.go:172] (0xc0025c0000) (1) Data frame sent
I0607 13:15:09.783992 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Stream removed, broadcasting: 1
I0607 13:15:09.784022 6 log.go:172] (0xc001bfc8f0) Go away received
I0607 13:15:09.784111 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Stream removed, broadcasting: 1
I0607 13:15:09.784128 6 log.go:172] (0xc001bfc8f0) (0xc0021d4d20) Stream removed, broadcasting: 3
I0607 13:15:09.784152 6 log.go:172] (0xc001bfc8f0) (0xc0025c00a0) Stream removed, broadcasting: 5
Jun 7 13:15:09.784: INFO: Waiting for endpoints: map[]
Jun 7 13:15:09.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.141:8080/dial?request=hostName&protocol=udp&host=10.244.2.140&port=8081&tries=1'] Namespace:pod-network-test-7394 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:15:09.788: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:15:09.823009 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Create stream
I0607 13:15:09.823039 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Stream added, broadcasting: 1
I0607 13:15:09.826843 6 log.go:172] (0xc0023da2c0) Reply frame received for 1
I0607 13:15:09.826883 6 log.go:172] (0xc0023da2c0) (0xc0025c0140) Create stream
I0607 13:15:09.826889 6 log.go:172] (0xc0023da2c0) (0xc0025c0140) Stream added, broadcasting: 3
I0607 13:15:09.827868 6 log.go:172] (0xc0023da2c0) Reply frame received for 3
I0607 13:15:09.827917 6 log.go:172] (0xc0023da2c0) (0xc0025c01e0) Create stream
I0607 13:15:09.827932 6 log.go:172] (0xc0023da2c0) (0xc0025c01e0) Stream added, broadcasting: 5
I0607 13:15:09.828968 6 log.go:172] (0xc0023da2c0) Reply frame received for 5
I0607 13:15:09.899647 6 log.go:172] (0xc0023da2c0) Data frame received for 3
I0607 13:15:09.899679 6 log.go:172] (0xc0025c0140) (3) Data frame handling
I0607 13:15:09.899695 6 log.go:172] (0xc0025c0140) (3) Data frame sent
I0607 13:15:09.900224 6 log.go:172] (0xc0023da2c0) Data frame received for 5
I0607 13:15:09.900246 6 log.go:172] (0xc0025c01e0) (5) Data frame handling
I0607 13:15:09.900270 6 log.go:172] (0xc0023da2c0) Data frame received for 3
I0607 13:15:09.900294 6 log.go:172] (0xc0025c0140) (3) Data frame handling
I0607 13:15:09.902184 6 log.go:172] (0xc0023da2c0) Data frame received for 1
I0607 13:15:09.902197 6 log.go:172] (0xc001ed14a0) (1) Data frame handling
I0607 13:15:09.902208 6 log.go:172] (0xc001ed14a0) (1) Data frame sent
I0607 13:15:09.902216 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Stream removed, broadcasting: 1
I0607 13:15:09.902231 6 log.go:172] (0xc0023da2c0) Go away received
I0607 13:15:09.902412 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Stream removed, broadcasting: 1
I0607 13:15:09.902480 6 log.go:172] (0xc0023da2c0) (0xc0025c0140) Stream removed, broadcasting: 3
I0607 13:15:09.902500 6 log.go:172] (0xc0023da2c0) (0xc0025c01e0) Stream removed, broadcasting: 5
Jun 7 13:15:09.902: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:15:09.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7394" for this suite.
Jun 7 13:15:33.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:15:34.033: INFO: namespace pod-network-test-7394 deletion completed in 24.126954206s
• [SLOW TEST:50.611 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:15:34.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-79836fea-aaa6-47b0-b2e3-38180aa91062
STEP: Creating a pod to test consume secrets
Jun 7 13:15:34.097: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7" in namespace "projected-2638" to be "success or failure"
Jun 7 13:15:34.101: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.41626ms
Jun 7 13:15:36.105: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729245s
Jun 7 13:15:38.109: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011649328s
STEP: Saw pod success
Jun 7 13:15:38.109: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7" satisfied condition "success or failure"
Jun 7 13:15:38.112: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7 container projected-secret-volume-test:
STEP: delete the pod
Jun 7 13:15:38.147: INFO: Waiting for pod pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7 to disappear
Jun 7 13:15:38.155: INFO: Pod pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:15:38.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2638" for this suite.
Jun 7 13:15:44.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:15:44.546: INFO: namespace projected-2638 deletion completed in 6.387641443s
• [SLOW TEST:10.511 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:15:44.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jun 7 13:15:48.794: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-201f32eb-2191-42de-aaeb-f52d438b4e17,GenerateName:,Namespace:events-1177,SelfLink:/api/v1/namespaces/events-1177/pods/send-events-201f32eb-2191-42de-aaeb-f52d438b4e17,UID:4fea7e7d-1e53-4dcf-9094-3cf011fe64de,ResourceVersion:15151985,Generation:0,CreationTimestamp:2020-06-07 13:15:44 +0000
UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 774339735,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ltc5c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ltc5c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ltc5c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.153,StartTime:2020-06-07 13:15:44 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-07 13:15:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://86f3e0b8e60952b68748ad618a6bcf91495de971c55f8414ede3421a13ceae20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jun 7 13:15:50.799: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jun 7 13:15:52.803: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:15:52.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1177" for this suite.
Jun 7 13:16:34.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:16:34.929: INFO: namespace events-1177 deletion completed in 42.108074877s
• [SLOW TEST:50.383 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:16:34.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6nng
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 13:16:35.020: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6nng" in namespace "subpath-9612" to be "success or failure"
Jun 7 13:16:35.024: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Pending", Reason="", readiness=false. Elapsed: 3.892039ms
Jun 7 13:16:37.029: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008938453s
Jun 7 13:16:39.034: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 4.013592635s
Jun 7 13:16:41.038: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 6.018382338s
Jun 7 13:16:43.043: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 8.023162241s
Jun 7 13:16:45.047: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 10.027167028s
Jun 7 13:16:47.051: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 12.03111321s
Jun 7 13:16:49.055: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 14.035561219s
Jun 7 13:16:51.060: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 16.039945226s
Jun 7 13:16:53.066: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 18.045696111s
Jun 7 13:16:55.070: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 20.049906999s
Jun 7 13:16:57.074: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 22.053763287s
Jun 7 13:16:59.196: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.175808078s
STEP: Saw pod success
Jun 7 13:16:59.196: INFO: Pod "pod-subpath-test-configmap-6nng" satisfied condition "success or failure"
Jun 7 13:16:59.199: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-6nng container test-container-subpath-configmap-6nng:
STEP: delete the pod
Jun 7 13:16:59.218: INFO: Waiting for pod pod-subpath-test-configmap-6nng to disappear
Jun 7 13:16:59.228: INFO: Pod pod-subpath-test-configmap-6nng no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6nng
Jun 7 13:16:59.228: INFO: Deleting pod "pod-subpath-test-configmap-6nng" in namespace "subpath-9612"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:16:59.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9612" for this suite.
Jun 7 13:17:05.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:17:05.350: INFO: namespace subpath-9612 deletion completed in 6.095755045s
• [SLOW TEST:30.421 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:17:05.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 13:17:05.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4476'
Jun 7 13:17:05.499: INFO: stderr: ""
Jun 7 13:17:05.499: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jun 7 13:17:05.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4476'
Jun 7 13:17:12.173: INFO: stderr: ""
Jun 7 13:17:12.173: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:17:12.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4476" for this suite.
Jun 7 13:17:18.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:17:18.274: INFO: namespace kubectl-4476 deletion completed in 6.098041197s
• [SLOW TEST:12.924 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:17:18.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 7 13:17:28.402: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.402: INFO: >>> kubeConfig: /root/.kube/config
I0607
13:17:28.439268 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Create stream
I0607 13:17:28.439297 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Stream added, broadcasting: 1
I0607 13:17:28.441467 6 log.go:172] (0xc002b6c9a0) Reply frame received for 1
I0607 13:17:28.441603 6 log.go:172] (0xc002b6c9a0) (0xc0017b4140) Create stream
I0607 13:17:28.441618 6 log.go:172] (0xc002b6c9a0) (0xc0017b4140) Stream added, broadcasting: 3
I0607 13:17:28.442826 6 log.go:172] (0xc002b6c9a0) Reply frame received for 3
I0607 13:17:28.442892 6 log.go:172] (0xc002b6c9a0) (0xc002ac8dc0) Create stream
I0607 13:17:28.442919 6 log.go:172] (0xc002b6c9a0) (0xc002ac8dc0) Stream added, broadcasting: 5
I0607 13:17:28.443975 6 log.go:172] (0xc002b6c9a0) Reply frame received for 5
I0607 13:17:28.504970 6 log.go:172] (0xc002b6c9a0) Data frame received for 5
I0607 13:17:28.504999 6 log.go:172] (0xc002ac8dc0) (5) Data frame handling
I0607 13:17:28.505033 6 log.go:172] (0xc002b6c9a0) Data frame received for 3
I0607 13:17:28.505076 6 log.go:172] (0xc0017b4140) (3) Data frame handling
I0607 13:17:28.505311 6 log.go:172] (0xc0017b4140) (3) Data frame sent
I0607 13:17:28.505342 6 log.go:172] (0xc002b6c9a0) Data frame received for 3
I0607 13:17:28.505358 6 log.go:172] (0xc0017b4140) (3) Data frame handling
I0607 13:17:28.506754 6 log.go:172] (0xc002b6c9a0) Data frame received for 1
I0607 13:17:28.506770 6 log.go:172] (0xc002ac8d20) (1) Data frame handling
I0607 13:17:28.506779 6 log.go:172] (0xc002ac8d20) (1) Data frame sent
I0607 13:17:28.506792 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Stream removed, broadcasting: 1
I0607 13:17:28.506874 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Stream removed, broadcasting: 1
I0607 13:17:28.506886 6 log.go:172] (0xc002b6c9a0) (0xc0017b4140) Stream removed, broadcasting: 3
I0607 13:17:28.506945 6 log.go:172] (0xc002b6c9a0) Go away received
I0607 13:17:28.507053 6 log.go:172] (0xc002b6c9a0) (0xc002ac8dc0) Stream removed, broadcasting: 5
Jun 7 13:17:28.507: INFO: Exec stderr: ""
Jun 7 13:17:28.507: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.507: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.540131 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Create stream
I0607 13:17:28.540163 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Stream added, broadcasting: 1
I0607 13:17:28.542182 6 log.go:172] (0xc002b6dad0) Reply frame received for 1
I0607 13:17:28.542251 6 log.go:172] (0xc002b6dad0) (0xc001acb360) Create stream
I0607 13:17:28.542279 6 log.go:172] (0xc002b6dad0) (0xc001acb360) Stream added, broadcasting: 3
I0607 13:17:28.543414 6 log.go:172] (0xc002b6dad0) Reply frame received for 3
I0607 13:17:28.543448 6 log.go:172] (0xc002b6dad0) (0xc002ac9180) Create stream
I0607 13:17:28.543467 6 log.go:172] (0xc002b6dad0) (0xc002ac9180) Stream added, broadcasting: 5
I0607 13:17:28.544489 6 log.go:172] (0xc002b6dad0) Reply frame received for 5
I0607 13:17:28.604105 6 log.go:172] (0xc002b6dad0) Data frame received for 5
I0607 13:17:28.604218 6 log.go:172] (0xc002ac9180) (5) Data frame handling
I0607 13:17:28.604256 6 log.go:172] (0xc002b6dad0) Data frame received for 3
I0607 13:17:28.604285 6 log.go:172] (0xc001acb360) (3) Data frame handling
I0607 13:17:28.604316 6 log.go:172] (0xc001acb360) (3) Data frame sent
I0607 13:17:28.604334 6 log.go:172] (0xc002b6dad0) Data frame received for 3
I0607 13:17:28.604347 6 log.go:172] (0xc001acb360) (3) Data frame handling
I0607 13:17:28.606307 6 log.go:172] (0xc002b6dad0) Data frame received for 1
I0607 13:17:28.606347 6 log.go:172] (0xc002ac90e0) (1) Data frame handling
I0607 13:17:28.606367 6 log.go:172] (0xc002ac90e0) (1) Data frame sent
I0607 13:17:28.606388 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Stream removed, broadcasting: 1
I0607 13:17:28.606415 6 log.go:172] (0xc002b6dad0) Go away received
I0607 13:17:28.606603 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Stream removed, broadcasting: 1
I0607 13:17:28.606644 6 log.go:172] (0xc002b6dad0) (0xc001acb360) Stream removed, broadcasting: 3
I0607 13:17:28.606658 6 log.go:172] (0xc002b6dad0) (0xc002ac9180) Stream removed, broadcasting: 5
Jun 7 13:17:28.606: INFO: Exec stderr: ""
Jun 7 13:17:28.606: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.606: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.640543 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Create stream
I0607 13:17:28.640569 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Stream added, broadcasting: 1
I0607 13:17:28.646496 6 log.go:172] (0xc0025ce580) Reply frame received for 1
I0607 13:17:28.646558 6 log.go:172] (0xc0025ce580) (0xc002ac9540) Create stream
I0607 13:17:28.646576 6 log.go:172] (0xc0025ce580) (0xc002ac9540) Stream added, broadcasting: 3
I0607 13:17:28.648214 6 log.go:172] (0xc0025ce580) Reply frame received for 3
I0607 13:17:28.648255 6 log.go:172] (0xc0025ce580) (0xc002ac95e0) Create stream
I0607 13:17:28.648270 6 log.go:172] (0xc0025ce580) (0xc002ac95e0) Stream added, broadcasting: 5
I0607 13:17:28.650035 6 log.go:172] (0xc0025ce580) Reply frame received for 5
I0607 13:17:28.723871 6 log.go:172] (0xc0025ce580) Data frame received for 3
I0607 13:17:28.723921 6 log.go:172] (0xc002ac9540) (3) Data frame handling
I0607 13:17:28.723952 6 log.go:172] (0xc002ac9540) (3) Data frame sent
I0607 13:17:28.724226 6 log.go:172] (0xc0025ce580) Data frame received for 5
I0607 13:17:28.724256 6 log.go:172] (0xc002ac95e0) (5) Data frame handling
I0607 13:17:28.724289 6 log.go:172] (0xc0025ce580) Data frame received for 3
I0607 13:17:28.724307 6 log.go:172] (0xc002ac9540) (3) Data frame handling
I0607 13:17:28.725516 6 log.go:172] (0xc0025ce580) Data frame received for 1
I0607 13:17:28.725542 6 log.go:172] (0xc002ac94a0) (1) Data frame handling
I0607 13:17:28.725579 6 log.go:172] (0xc002ac94a0) (1) Data frame sent
I0607 13:17:28.725742 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Stream removed, broadcasting: 1
I0607 13:17:28.725856 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Stream removed, broadcasting: 1
I0607 13:17:28.725877 6 log.go:172] (0xc0025ce580) (0xc002ac9540) Stream removed, broadcasting: 3
I0607 13:17:28.725891 6 log.go:172] (0xc0025ce580) (0xc002ac95e0) Stream removed, broadcasting: 5
Jun 7 13:17:28.725: INFO: Exec stderr: ""
I0607 13:17:28.725924 6 log.go:172] (0xc0025ce580) Go away received
Jun 7 13:17:28.725: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.726: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.791115 6 log.go:172] (0xc002ac0210) (0xc001acb680) Create stream
I0607 13:17:28.791147 6 log.go:172] (0xc002ac0210) (0xc001acb680) Stream added, broadcasting: 1
I0607 13:17:28.793749 6 log.go:172] (0xc002ac0210) Reply frame received for 1
I0607 13:17:28.793819 6 log.go:172] (0xc002ac0210) (0xc001d7fe00) Create stream
I0607 13:17:28.793848 6 log.go:172] (0xc002ac0210) (0xc001d7fe00) Stream added, broadcasting: 3
I0607 13:17:28.794989 6 log.go:172] (0xc002ac0210) Reply frame received for 3
I0607 13:17:28.795017 6 log.go:172] (0xc002ac0210) (0xc0025c1860) Create stream
I0607 13:17:28.795026 6 log.go:172] (0xc002ac0210) (0xc0025c1860) Stream added, broadcasting: 5
I0607 13:17:28.796057 6 log.go:172] (0xc002ac0210) Reply frame received for 5
I0607 13:17:28.856508 6 log.go:172] (0xc002ac0210) Data frame received for 5
I0607 13:17:28.856679 6 log.go:172] (0xc0025c1860) (5) Data frame handling
I0607 13:17:28.856765 6 log.go:172] (0xc002ac0210) Data frame received for 3
I0607 13:17:28.856794 6 log.go:172] (0xc001d7fe00) (3) Data frame handling
I0607 13:17:28.856973 6 log.go:172] (0xc001d7fe00) (3) Data frame sent
I0607 13:17:28.856986 6 log.go:172] (0xc002ac0210) Data frame received for 3
I0607 13:17:28.856995 6 log.go:172] (0xc001d7fe00) (3) Data frame handling
I0607 13:17:28.858075 6 log.go:172] (0xc002ac0210) Data frame received for 1
I0607 13:17:28.858095 6 log.go:172] (0xc001acb680) (1) Data frame handling
I0607 13:17:28.858109 6 log.go:172] (0xc001acb680) (1) Data frame sent
I0607 13:17:28.858122 6 log.go:172] (0xc002ac0210) (0xc001acb680) Stream removed, broadcasting: 1
I0607 13:17:28.858131 6 log.go:172] (0xc002ac0210) Go away received
I0607 13:17:28.858216 6 log.go:172] (0xc002ac0210) (0xc001acb680) Stream removed, broadcasting: 1
I0607 13:17:28.858234 6 log.go:172] (0xc002ac0210) (0xc001d7fe00) Stream removed, broadcasting: 3
I0607 13:17:28.858243 6 log.go:172] (0xc002ac0210) (0xc0025c1860) Stream removed, broadcasting: 5
Jun 7 13:17:28.858: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jun 7 13:17:28.858: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.858: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.886098 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Create stream
I0607 13:17:28.886138 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Stream added, broadcasting: 1
I0607 13:17:28.888363 6 log.go:172] (0xc0025cf4a0) Reply frame received for 1
I0607 13:17:28.888397 6 log.go:172] (0xc0025cf4a0) (0xc001d7fea0) Create stream
I0607 13:17:28.888409 6 log.go:172] (0xc0025cf4a0) (0xc001d7fea0) Stream added, broadcasting: 3
I0607 13:17:28.889412 6 log.go:172] (0xc0025cf4a0) Reply frame received for 3
I0607 13:17:28.889447 6 log.go:172] (0xc0025cf4a0) (0xc002ac99a0) Create stream
I0607 13:17:28.889459 6 log.go:172] (0xc0025cf4a0) (0xc002ac99a0) Stream added, broadcasting: 5
I0607 13:17:28.890153 6 log.go:172] (0xc0025cf4a0) Reply frame received for 5
I0607 13:17:28.961808 6 log.go:172] (0xc0025cf4a0) Data frame received for 3
I0607 13:17:28.961844 6 log.go:172] (0xc001d7fea0) (3) Data frame handling
I0607 13:17:28.961865 6 log.go:172] (0xc001d7fea0) (3) Data frame sent
I0607 13:17:28.961879 6 log.go:172] (0xc0025cf4a0) Data frame received for 3
I0607 13:17:28.961891 6 log.go:172] (0xc001d7fea0) (3) Data frame handling
I0607 13:17:28.961947 6 log.go:172] (0xc0025cf4a0) Data frame received for 5
I0607 13:17:28.961966 6 log.go:172] (0xc002ac99a0) (5) Data frame handling
I0607 13:17:28.963450 6 log.go:172] (0xc0025cf4a0) Data frame received for 1
I0607 13:17:28.963473 6 log.go:172] (0xc002ac9900) (1) Data frame handling
I0607 13:17:28.963506 6 log.go:172] (0xc002ac9900) (1) Data frame sent
I0607 13:17:28.963540 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Stream removed, broadcasting: 1
I0607 13:17:28.963561 6 log.go:172] (0xc0025cf4a0) Go away received
I0607 13:17:28.963774 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Stream removed, broadcasting: 1
I0607 13:17:28.963797 6 log.go:172] (0xc0025cf4a0) (0xc001d7fea0) Stream removed, broadcasting: 3
I0607 13:17:28.963809 6 log.go:172] (0xc0025cf4a0) (0xc002ac99a0) Stream removed, broadcasting: 5
Jun 7 13:17:28.963: INFO: Exec stderr: ""
Jun 7 13:17:28.963: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.963: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.996957 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Create stream
I0607 13:17:28.996989 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Stream added, broadcasting: 1
I0607 13:17:28.999423 6 log.go:172] (0xc002dd7970) Reply frame received for 1
I0607 13:17:28.999469 6 log.go:172] (0xc002dd7970) (0xc0025c1900) Create stream
I0607 13:17:28.999485 6 log.go:172] (0xc002dd7970)
(0xc0025c1900) Stream added, broadcasting: 3 I0607 13:17:29.000427 6 log.go:172] (0xc002dd7970) Reply frame received for 3 I0607 13:17:29.000453 6 log.go:172] (0xc002dd7970) (0xc0025c19a0) Create stream I0607 13:17:29.000467 6 log.go:172] (0xc002dd7970) (0xc0025c19a0) Stream added, broadcasting: 5 I0607 13:17:29.001735 6 log.go:172] (0xc002dd7970) Reply frame received for 5 I0607 13:17:29.070071 6 log.go:172] (0xc002dd7970) Data frame received for 5 I0607 13:17:29.070105 6 log.go:172] (0xc0025c19a0) (5) Data frame handling I0607 13:17:29.070285 6 log.go:172] (0xc002dd7970) Data frame received for 3 I0607 13:17:29.070313 6 log.go:172] (0xc0025c1900) (3) Data frame handling I0607 13:17:29.070340 6 log.go:172] (0xc0025c1900) (3) Data frame sent I0607 13:17:29.070356 6 log.go:172] (0xc002dd7970) Data frame received for 3 I0607 13:17:29.070371 6 log.go:172] (0xc0025c1900) (3) Data frame handling I0607 13:17:29.071531 6 log.go:172] (0xc002dd7970) Data frame received for 1 I0607 13:17:29.071584 6 log.go:172] (0xc001f141e0) (1) Data frame handling I0607 13:17:29.071622 6 log.go:172] (0xc001f141e0) (1) Data frame sent I0607 13:17:29.071645 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Stream removed, broadcasting: 1 I0607 13:17:29.071664 6 log.go:172] (0xc002dd7970) Go away received I0607 13:17:29.071914 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Stream removed, broadcasting: 1 I0607 13:17:29.071946 6 log.go:172] (0xc002dd7970) (0xc0025c1900) Stream removed, broadcasting: 3 I0607 13:17:29.071969 6 log.go:172] (0xc002dd7970) (0xc0025c19a0) Stream removed, broadcasting: 5 Jun 7 13:17:29.071: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 7 13:17:29.072: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 13:17:29.072: INFO: >>> 
kubeConfig: /root/.kube/config I0607 13:17:29.107427 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Create stream I0607 13:17:29.107452 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Stream added, broadcasting: 1 I0607 13:17:29.109782 6 log.go:172] (0xc0021de2c0) Reply frame received for 1 I0607 13:17:29.109844 6 log.go:172] (0xc0021de2c0) (0xc001acb860) Create stream I0607 13:17:29.109910 6 log.go:172] (0xc0021de2c0) (0xc001acb860) Stream added, broadcasting: 3 I0607 13:17:29.110900 6 log.go:172] (0xc0021de2c0) Reply frame received for 3 I0607 13:17:29.110938 6 log.go:172] (0xc0021de2c0) (0xc0017b41e0) Create stream I0607 13:17:29.110953 6 log.go:172] (0xc0021de2c0) (0xc0017b41e0) Stream added, broadcasting: 5 I0607 13:17:29.111860 6 log.go:172] (0xc0021de2c0) Reply frame received for 5 I0607 13:17:29.188070 6 log.go:172] (0xc0021de2c0) Data frame received for 5 I0607 13:17:29.188098 6 log.go:172] (0xc0017b41e0) (5) Data frame handling I0607 13:17:29.188131 6 log.go:172] (0xc0021de2c0) Data frame received for 3 I0607 13:17:29.188158 6 log.go:172] (0xc001acb860) (3) Data frame handling I0607 13:17:29.188182 6 log.go:172] (0xc001acb860) (3) Data frame sent I0607 13:17:29.188194 6 log.go:172] (0xc0021de2c0) Data frame received for 3 I0607 13:17:29.188206 6 log.go:172] (0xc001acb860) (3) Data frame handling I0607 13:17:29.189637 6 log.go:172] (0xc0021de2c0) Data frame received for 1 I0607 13:17:29.189664 6 log.go:172] (0xc002ac9cc0) (1) Data frame handling I0607 13:17:29.189698 6 log.go:172] (0xc002ac9cc0) (1) Data frame sent I0607 13:17:29.189798 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Stream removed, broadcasting: 1 I0607 13:17:29.189852 6 log.go:172] (0xc0021de2c0) Go away received I0607 13:17:29.189918 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Stream removed, broadcasting: 1 I0607 13:17:29.189936 6 log.go:172] (0xc0021de2c0) (0xc001acb860) Stream removed, broadcasting: 3 I0607 13:17:29.189949 6 log.go:172] (0xc0021de2c0) (0xc0017b41e0) Stream removed, 
broadcasting: 5 Jun 7 13:17:29.189: INFO: Exec stderr: "" Jun 7 13:17:29.189: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 13:17:29.190: INFO: >>> kubeConfig: /root/.kube/config I0607 13:17:29.224405 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Create stream I0607 13:17:29.224435 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Stream added, broadcasting: 1 I0607 13:17:29.226919 6 log.go:172] (0xc001d43a20) Reply frame received for 1 I0607 13:17:29.226958 6 log.go:172] (0xc001d43a20) (0xc0017b4820) Create stream I0607 13:17:29.226974 6 log.go:172] (0xc001d43a20) (0xc0017b4820) Stream added, broadcasting: 3 I0607 13:17:29.227953 6 log.go:172] (0xc001d43a20) Reply frame received for 3 I0607 13:17:29.227982 6 log.go:172] (0xc001d43a20) (0xc0017b4b40) Create stream I0607 13:17:29.227992 6 log.go:172] (0xc001d43a20) (0xc0017b4b40) Stream added, broadcasting: 5 I0607 13:17:29.228978 6 log.go:172] (0xc001d43a20) Reply frame received for 5 I0607 13:17:29.298718 6 log.go:172] (0xc001d43a20) Data frame received for 5 I0607 13:17:29.298749 6 log.go:172] (0xc0017b4b40) (5) Data frame handling I0607 13:17:29.298780 6 log.go:172] (0xc001d43a20) Data frame received for 3 I0607 13:17:29.298804 6 log.go:172] (0xc0017b4820) (3) Data frame handling I0607 13:17:29.298830 6 log.go:172] (0xc0017b4820) (3) Data frame sent I0607 13:17:29.298850 6 log.go:172] (0xc001d43a20) Data frame received for 3 I0607 13:17:29.298878 6 log.go:172] (0xc0017b4820) (3) Data frame handling I0607 13:17:29.300230 6 log.go:172] (0xc001d43a20) Data frame received for 1 I0607 13:17:29.300273 6 log.go:172] (0xc0017b4780) (1) Data frame handling I0607 13:17:29.300298 6 log.go:172] (0xc0017b4780) (1) Data frame sent I0607 13:17:29.300321 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Stream removed, broadcasting: 1 I0607 13:17:29.300345 6 
log.go:172] (0xc001d43a20) Go away received I0607 13:17:29.300464 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Stream removed, broadcasting: 1 I0607 13:17:29.300488 6 log.go:172] (0xc001d43a20) (0xc0017b4820) Stream removed, broadcasting: 3 I0607 13:17:29.300508 6 log.go:172] (0xc001d43a20) (0xc0017b4b40) Stream removed, broadcasting: 5 Jun 7 13:17:29.300: INFO: Exec stderr: "" Jun 7 13:17:29.300: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 13:17:29.300: INFO: >>> kubeConfig: /root/.kube/config I0607 13:17:29.326420 6 log.go:172] (0xc0021ded10) (0xc001218000) Create stream I0607 13:17:29.326447 6 log.go:172] (0xc0021ded10) (0xc001218000) Stream added, broadcasting: 1 I0607 13:17:29.328682 6 log.go:172] (0xc0021ded10) Reply frame received for 1 I0607 13:17:29.328706 6 log.go:172] (0xc0021ded10) (0xc0025c1a40) Create stream I0607 13:17:29.328718 6 log.go:172] (0xc0021ded10) (0xc0025c1a40) Stream added, broadcasting: 3 I0607 13:17:29.329766 6 log.go:172] (0xc0021ded10) Reply frame received for 3 I0607 13:17:29.329818 6 log.go:172] (0xc0021ded10) (0xc0025c1ae0) Create stream I0607 13:17:29.329832 6 log.go:172] (0xc0021ded10) (0xc0025c1ae0) Stream added, broadcasting: 5 I0607 13:17:29.330839 6 log.go:172] (0xc0021ded10) Reply frame received for 5 I0607 13:17:29.398728 6 log.go:172] (0xc0021ded10) Data frame received for 5 I0607 13:17:29.398783 6 log.go:172] (0xc0025c1ae0) (5) Data frame handling I0607 13:17:29.398811 6 log.go:172] (0xc0021ded10) Data frame received for 3 I0607 13:17:29.398826 6 log.go:172] (0xc0025c1a40) (3) Data frame handling I0607 13:17:29.398843 6 log.go:172] (0xc0025c1a40) (3) Data frame sent I0607 13:17:29.398864 6 log.go:172] (0xc0021ded10) Data frame received for 3 I0607 13:17:29.398877 6 log.go:172] (0xc0025c1a40) (3) Data frame handling I0607 13:17:29.401579 6 log.go:172] 
(0xc0021ded10) Data frame received for 1 I0607 13:17:29.401617 6 log.go:172] (0xc001218000) (1) Data frame handling I0607 13:17:29.401664 6 log.go:172] (0xc001218000) (1) Data frame sent I0607 13:17:29.401692 6 log.go:172] (0xc0021ded10) (0xc001218000) Stream removed, broadcasting: 1 I0607 13:17:29.401723 6 log.go:172] (0xc0021ded10) Go away received I0607 13:17:29.401927 6 log.go:172] (0xc0021ded10) (0xc001218000) Stream removed, broadcasting: 1 I0607 13:17:29.401963 6 log.go:172] (0xc0021ded10) (0xc0025c1a40) Stream removed, broadcasting: 3 I0607 13:17:29.401991 6 log.go:172] (0xc0021ded10) (0xc0025c1ae0) Stream removed, broadcasting: 5 Jun 7 13:17:29.402: INFO: Exec stderr: "" Jun 7 13:17:29.402: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 13:17:29.402: INFO: >>> kubeConfig: /root/.kube/config I0607 13:17:29.433852 6 log.go:172] (0xc000288630) (0xc001d7e000) Create stream I0607 13:17:29.433878 6 log.go:172] (0xc000288630) (0xc001d7e000) Stream added, broadcasting: 1 I0607 13:17:29.435805 6 log.go:172] (0xc000288630) Reply frame received for 1 I0607 13:17:29.435840 6 log.go:172] (0xc000288630) (0xc001a8e000) Create stream I0607 13:17:29.435855 6 log.go:172] (0xc000288630) (0xc001a8e000) Stream added, broadcasting: 3 I0607 13:17:29.436616 6 log.go:172] (0xc000288630) Reply frame received for 3 I0607 13:17:29.436639 6 log.go:172] (0xc000288630) (0xc001a8e0a0) Create stream I0607 13:17:29.436648 6 log.go:172] (0xc000288630) (0xc001a8e0a0) Stream added, broadcasting: 5 I0607 13:17:29.437690 6 log.go:172] (0xc000288630) Reply frame received for 5 I0607 13:17:29.514648 6 log.go:172] (0xc000288630) Data frame received for 5 I0607 13:17:29.514689 6 log.go:172] (0xc001a8e0a0) (5) Data frame handling I0607 13:17:29.514720 6 log.go:172] (0xc000288630) Data frame received for 3 I0607 
13:17:29.514740 6 log.go:172] (0xc001a8e000) (3) Data frame handling I0607 13:17:29.514760 6 log.go:172] (0xc001a8e000) (3) Data frame sent I0607 13:17:29.514780 6 log.go:172] (0xc000288630) Data frame received for 3 I0607 13:17:29.514795 6 log.go:172] (0xc001a8e000) (3) Data frame handling I0607 13:17:29.516753 6 log.go:172] (0xc000288630) Data frame received for 1 I0607 13:17:29.516782 6 log.go:172] (0xc001d7e000) (1) Data frame handling I0607 13:17:29.516795 6 log.go:172] (0xc001d7e000) (1) Data frame sent I0607 13:17:29.516819 6 log.go:172] (0xc000288630) (0xc001d7e000) Stream removed, broadcasting: 1 I0607 13:17:29.516850 6 log.go:172] (0xc000288630) Go away received I0607 13:17:29.516971 6 log.go:172] (0xc000288630) (0xc001d7e000) Stream removed, broadcasting: 1 I0607 13:17:29.516995 6 log.go:172] (0xc000288630) (0xc001a8e000) Stream removed, broadcasting: 3 I0607 13:17:29.517005 6 log.go:172] (0xc000288630) (0xc001a8e0a0) Stream removed, broadcasting: 5 Jun 7 13:17:29.517: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:17:29.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6415" for this suite. 
Jun 7 13:18:09.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:09.603: INFO: namespace e2e-kubelet-etc-hosts-6415 deletion completed in 40.082276319s
• [SLOW TEST:51.328 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:09.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jun 7 13:18:14.748: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:18:15.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1951" for this suite.
Jun 7 13:18:37.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:37.920: INFO: namespace replicaset-1951 deletion completed in 22.097117762s
• [SLOW TEST:28.317 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:37.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jun 7 13:18:42.599: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3190 pod-service-account-8c61b0b8-b534-4946-b3be-e0b1dc349fbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jun 7 13:18:45.867: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3190 pod-service-account-8c61b0b8-b534-4946-b3be-e0b1dc349fbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jun 7 13:18:46.071: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3190 pod-service-account-8c61b0b8-b534-4946-b3be-e0b1dc349fbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:18:46.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3190" for this suite.
Jun 7 13:18:52.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:52.380: INFO: namespace svcaccounts-3190 deletion completed in 6.103410227s
• [SLOW TEST:14.460 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:52.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jun 7 13:18:52.435: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:18:52.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7459" for this suite.
Jun 7 13:18:58.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:58.625: INFO: namespace kubectl-7459 deletion completed in 6.093451624s
• [SLOW TEST:6.244 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:58.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c9b4eeb7-1b68-4d26-b85b-b4a199841150
STEP: Creating a pod to test consume secrets
Jun 7 13:18:58.724: INFO: Waiting up to 5m0s for pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532" in namespace "secrets-6504" to be "success or failure"
Jun 7 13:18:58.726: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517713ms
Jun 7 13:19:00.805: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081474213s
Jun 7 13:19:02.809: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085534832s
STEP: Saw pod success
Jun 7 13:19:02.809: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532" satisfied condition "success or failure"
Jun 7 13:19:02.812: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532 container secret-volume-test:
STEP: delete the pod
Jun 7 13:19:02.854: INFO: Waiting for pod pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532 to disappear
Jun 7 13:19:02.858: INFO: Pod pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:19:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6504" for this suite.
Jun 7 13:19:08.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:19:08.954: INFO: namespace secrets-6504 deletion completed in 6.092079417s
• [SLOW TEST:10.329 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:19:08.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-dfe731fd-9d31-4c32-81bf-7bac0a3f0c76 in namespace container-probe-8992
Jun 7 13:19:13.070: INFO: Started pod test-webserver-dfe731fd-9d31-4c32-81bf-7bac0a3f0c76 in namespace container-probe-8992
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:19:13.074: INFO: Initial restart count of pod test-webserver-dfe731fd-9d31-4c32-81bf-7bac0a3f0c76 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:23:13.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8992" for this suite.
Jun 7 13:23:19.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:23:19.964: INFO: namespace container-probe-8992 deletion completed in 6.113615575s
• [SLOW TEST:251.011 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:23:19.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:23:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2821" for this suite.
Jun 7 13:23:30.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:23:30.133: INFO: namespace kubelet-test-2821 deletion completed in 6.098969115s
• [SLOW TEST:10.168 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:23:30.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-bec98581-a549-406e-8886-5e3b10b241fe in namespace container-probe-7425
Jun 7 13:23:34.251: INFO: Started pod busybox-bec98581-a549-406e-8886-5e3b10b241fe in namespace container-probe-7425
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:23:34.255: INFO: Initial restart count of pod busybox-bec98581-a549-406e-8886-5e3b10b241fe is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:27:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7425" for this suite.
Jun 7 13:27:41.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:27:41.897: INFO: namespace container-probe-7425 deletion completed in 6.290522068s
• [SLOW TEST:251.764 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:27:41.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0d92a951-b044-4829-a0d6-4742869814d2 in namespace container-probe-7092
Jun 7 13:27:48.061: INFO: Started pod liveness-0d92a951-b044-4829-a0d6-4742869814d2 in namespace container-probe-7092
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:27:48.064: INFO: Initial restart count of pod liveness-0d92a951-b044-4829-a0d6-4742869814d2 is 0
Jun 7 13:28:10.447: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 1 (22.383703652s elapsed)
Jun 7 13:28:28.522: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 2 (40.458100017s elapsed)
Jun 7 13:28:50.646: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 3 (1m2.581926673s elapsed)
Jun 7 13:29:08.732: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 4 (1m20.667882854s elapsed)
Jun 7 13:30:11.314: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 5 (2m23.250619437s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:30:11.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7092" for this suite.
Jun 7 13:30:17.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:30:17.513: INFO: namespace container-probe-7092 deletion completed in 6.130736515s • [SLOW TEST:155.615 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:30:17.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-41ad1cc4-df0a-4196-ae80-583a2f1d571f STEP: Creating configMap with name cm-test-opt-upd-15a36a1c-4c22-4d91-91f0-e266eec46df3 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-41ad1cc4-df0a-4196-ae80-583a2f1d571f STEP: Updating configmap cm-test-opt-upd-15a36a1c-4c22-4d91-91f0-e266eec46df3 STEP: Creating configMap with name cm-test-opt-create-311dd219-5a30-416e-b5a3-e06b45c260e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:31:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7006" for this suite. Jun 7 13:32:00.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:32:00.635: INFO: namespace configmap-7006 deletion completed in 24.161527619s • [SLOW TEST:103.122 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:32:00.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 7 13:32:00.885: INFO: Waiting up to 5m0s for pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d" in namespace "emptydir-5407" to be "success or failure" Jun 7 13:32:00.912: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Pending", 
Reason="", readiness=false. Elapsed: 27.233616ms Jun 7 13:32:02.917: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031332447s Jun 7 13:32:04.921: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036010665s Jun 7 13:32:06.964: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079176353s STEP: Saw pod success Jun 7 13:32:06.964: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d" satisfied condition "success or failure" Jun 7 13:32:06.968: INFO: Trying to get logs from node iruya-worker pod pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d container test-container: STEP: delete the pod Jun 7 13:32:07.016: INFO: Waiting for pod pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d to disappear Jun 7 13:32:07.026: INFO: Pod pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:32:07.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5407" for this suite. 
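The EmptyDir spec above creates a short-lived pod that writes a 0644 file as a non-root user onto the node's default storage medium. A minimal sketch of that kind of pod follows; the names, UID, and command are illustrative, not the spec the test framework actually generates:

```yaml
# Illustrative sketch only -- resembles the pod the (non-root,0644,default)
# EmptyDir conformance test creates; all names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  securityContext:
    runAsUser: 1001              # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                 # default medium = node disk; medium: Memory would be tmpfs
```

The test passes when the pod reaches `Succeeded` and its log shows the expected mode and ownership, which is why the runner polls the pod phase above.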
Jun 7 13:32:13.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:32:13.190: INFO: namespace emptydir-5407 deletion completed in 6.161175439s
• [SLOW TEST:12.554 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:32:13.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:32:13.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d" in namespace "downward-api-4527" to be "success or failure"
Jun 7 13:32:13.371: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.090916ms
Jun 7 13:32:15.374: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048656596s
Jun 7 13:32:17.378: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052571316s
Jun 7 13:32:19.383: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057138036s
STEP: Saw pod success
Jun 7 13:32:19.383: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d" satisfied condition "success or failure"
Jun 7 13:32:19.386: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d container client-container:
STEP: delete the pod
Jun 7 13:32:19.531: INFO: Waiting for pod downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d to disappear
Jun 7 13:32:19.695: INFO: Pod downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:32:19.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4527" for this suite.
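The Downward API volume test above exposes the container's own memory request as a file inside the pod. A sketch of the mechanism it exercises (names and values are illustrative; the actual generated spec differs):

```yaml
# Illustrative sketch -- a downwardAPI volume projecting requests.memory
# into a file the container can cat; all names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memreq-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

`resourceFieldRef` values are written in the volume in canonical units, so the test checks the file content against the request it set on the container.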
Jun 7 13:32:25.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:32:25.803: INFO: namespace downward-api-4527 deletion completed in 6.104317073s
• [SLOW TEST:12.613 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:32:25.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6139586d-a14c-41d7-ad50-f87568471026
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-6139586d-a14c-41d7-ad50-f87568471026
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:33:42.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4011" for this suite.
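The Projected configMap test above updates a ConfigMap after the pod starts and waits for the kubelet to refresh the mounted file. A sketch of the kind of projection it mounts (ConfigMap and pod names are illustrative):

```yaml
# Illustrative sketch -- a projected volume sourcing a ConfigMap; updates to
# the ConfigMap eventually appear in the mounted files. Names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: my-config        # edits to this ConfigMap propagate into /etc/projected
```

Propagation is eventual (it depends on the kubelet sync period and cache TTL), which is why the test's "waiting to observe update in volume" step can take on the order of a minute, as the elapsed time above shows.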
Jun 7 13:34:06.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:34:06.113: INFO: namespace projected-4011 deletion completed in 24.108701582s
• [SLOW TEST:100.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:34:06.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3886
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3886
STEP: Creating statefulset with conflicting port in namespace statefulset-3886
STEP: Waiting until pod test-pod will start running in namespace statefulset-3886
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3886
Jun 7 13:34:12.465: INFO: Observed stateful pod in namespace: statefulset-3886, name: ss-0, uid: 644b33f7-8daa-42d1-89b8-09caa87fb653, status phase: Pending. Waiting for statefulset controller to delete.
Jun 7 13:34:12.575: INFO: Observed stateful pod in namespace: statefulset-3886, name: ss-0, uid: 644b33f7-8daa-42d1-89b8-09caa87fb653, status phase: Failed. Waiting for statefulset controller to delete.
Jun 7 13:34:12.592: INFO: Observed stateful pod in namespace: statefulset-3886, name: ss-0, uid: 644b33f7-8daa-42d1-89b8-09caa87fb653, status phase: Failed. Waiting for statefulset controller to delete.
Jun 7 13:34:12.670: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3886
STEP: Removing pod with conflicting port in namespace statefulset-3886
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3886 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 7 13:34:18.864: INFO: Deleting all statefulset in ns statefulset-3886
Jun 7 13:34:18.867: INFO: Scaling statefulset ss to 0
Jun 7 13:34:28.898: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:34:28.900: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:34:29.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3886" for this suite.
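The StatefulSet test above forces `ss-0` into the `Failed` phase via a port conflict on the chosen node, then verifies the controller deletes and recreates it. A sketch of the two objects involved (the governing headless service named `test`, as in the log, plus a one-replica StatefulSet; image and port number are illustrative assumptions):

```yaml
# Illustrative sketch -- a headless service plus a StatefulSet of the shape
# the "recreate evicted statefulset" test creates. The hostPort value is a
# hypothetical stand-in for whatever conflicting port the test picks.
apiVersion: v1
kind: Service
metadata:
  name: test                 # headless governing service, as created in the log
spec:
  clusterIP: None
  selector:
    app: ss-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017    # colliding with a pre-placed pod's hostPort drives ss-0 to Failed
```

Because the pod's identity (`ss-0`) is stable, the controller must observe the Failed pod, delete it, and create a replacement with the same name, which is exactly the Pending/Failed/delete sequence in the log above.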
Jun 7 13:34:37.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:34:37.251: INFO: namespace statefulset-3886 deletion completed in 8.231370696s
• [SLOW TEST:31.138 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:34:37.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-dc3c4960-0091-4a0b-94c4-0c8f0a28e682
STEP: Creating a pod to test consume configMaps
Jun 7 13:34:37.989: INFO: Waiting up to 5m0s for pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b" in namespace "configmap-8622" to be "success or failure"
Jun 7 13:34:37.992: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.900428ms
Jun 7 13:34:40.029: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039482652s
Jun 7 13:34:42.188: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198840671s
Jun 7 13:34:44.192: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202467039s
STEP: Saw pod success
Jun 7 13:34:44.192: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b" satisfied condition "success or failure"
Jun 7 13:34:44.195: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b container configmap-volume-test:
STEP: delete the pod
Jun 7 13:34:44.245: INFO: Waiting for pod pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b to disappear
Jun 7 13:34:44.268: INFO: Pod pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:34:44.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8622" for this suite.
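"Mappings and Item mode set" in the test above refers to the `items` list of a ConfigMap volume: remapping a key to a custom path and setting a per-item file mode. A sketch of that shape (names, key, path, and mode are illustrative):

```yaml
# Illustrative sketch -- ConfigMap volume with key-to-path mapping and a
# per-item mode, as exercised by the conformance test. Names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapped-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm/path/to && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: my-config
      items:
      - key: data-2
        path: path/to/data-2   # key remapped to a nested path
        mode: 0400             # per-item file mode ("Item mode set")
```

The per-item `mode` overrides the volume-wide `defaultMode` for just that file, and the test asserts both the remapped path and the resulting permission bits.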
Jun 7 13:34:50.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:34:50.445: INFO: namespace configmap-8622 deletion completed in 6.172643069s
• [SLOW TEST:13.193 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:34:50.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:34:50.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6" in namespace "projected-3091" to be "success or failure"
Jun 7 13:34:50.599: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.01838ms
Jun 7 13:34:52.604: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02741891s
Jun 7 13:34:54.624: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047688217s
Jun 7 13:34:56.628: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051974473s
STEP: Saw pod success
Jun 7 13:34:56.628: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6" satisfied condition "success or failure"
Jun 7 13:34:56.632: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6 container client-container:
STEP: delete the pod
Jun 7 13:34:56.917: INFO: Waiting for pod downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6 to disappear
Jun 7 13:34:56.929: INFO: Pod downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:34:56.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3091" for this suite.
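"Podname only" above means the projected volume exposes a single downward-API item, `metadata.name`, as a file. A sketch of that projection (pod name and paths are illustrative):

```yaml
# Illustrative sketch -- projected downwardAPI source exposing only the pod
# name, mirroring the "should provide podname only" test. Names hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The test then reads the container log (the `cat` output) and asserts it equals the pod's own name, which is the "Trying to get logs ... container client-container" step above.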
Jun 7 13:35:03.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:35:03.218: INFO: namespace projected-3091 deletion completed in 6.135923121s
• [SLOW TEST:12.773 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:35:03.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jun 7 13:35:03.357: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jun 7 13:35:04.494: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jun 7 13:35:07.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:35:09.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:35:11.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:35:14.080: INFO: Waited 632.862701ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:35:15.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4358" for this suite.
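"Registering the sample API server" above means deploying an extension apiserver behind a Service and pointing the kube-aggregator at it with an APIService object. A sketch of that registration, assuming the group/version of the upstream sample-apiserver (the service name, namespace, and priorities here are illustrative):

```yaml
# Illustrative sketch -- an APIService registering an aggregated API served
# by an in-cluster Service. Service name/namespace and priority values are
# hypothetical; the group/version follows the upstream sample-apiserver.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api          # Service fronting the extension apiserver
    namespace: aggregator-4358
  # caBundle: <base64-encoded CA for the serving cert> (left elided here)
```

Once the backing Deployment reports Available (the status dumps above show it progressing from `UnavailableReplicas:1`), the aggregator proxies `/apis/wardle.k8s.io/v1alpha1/...` requests to that Service.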
Jun 7 13:35:21.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:35:21.528: INFO: namespace aggregator-4358 deletion completed in 6.298890554s
• [SLOW TEST:18.310 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:35:21.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 7 13:35:21.654: INFO: Waiting up to 5m0s for pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed" in namespace "emptydir-7968" to be "success or failure"
Jun 7 13:35:21.664: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.749691ms
Jun 7 13:35:23.763: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109136269s
Jun 7 13:35:25.767: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113147877s
Jun 7 13:35:27.771: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116957099s
STEP: Saw pod success
Jun 7 13:35:27.771: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed" satisfied condition "success or failure"
Jun 7 13:35:27.774: INFO: Trying to get logs from node iruya-worker pod pod-525f3694-175e-44ee-a0b7-abf62b757bed container test-container:
STEP: delete the pod
Jun 7 13:35:27.849: INFO: Waiting for pod pod-525f3694-175e-44ee-a0b7-abf62b757bed to disappear
Jun 7 13:35:27.912: INFO: Pod pod-525f3694-175e-44ee-a0b7-abf62b757bed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:35:27.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7968" for this suite.
Jun 7 13:35:36.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:35:36.165: INFO: namespace emptydir-7968 deletion completed in 8.248455123s
• [SLOW TEST:14.636 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:35:36.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 7 13:35:36.394: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155040,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 7 13:35:36.394: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155040,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 7 13:35:46.408: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155060,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 7 13:35:46.408: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155060,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 7 13:35:56.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155079,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 7 13:35:56.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155079,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 7 13:36:06.423: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155099,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 7 13:36:06.424: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155099,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 7 13:36:16.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155119,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 13:36:16.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155119,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 7 13:36:26.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155140,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 13:36:26.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155140,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:36:36.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-748" for this suite. 
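The watch test above routes events through label selectors: watcher A sees only configmaps labeled `watch-this-configmap=multiple-watchers-A`, watcher B only the B label, and the A-or-B watcher uses a set-based `in` selector over both values, which is why each ADDED/MODIFIED/DELETED event appears twice in the log. A minimal Python sketch of that routing logic (a simulation of the selector semantics, not the real client-go watch machinery; the watcher names and `deliver` helper are illustrative):

```python
# Simulated routing of watch events to label-selector-based watchers.
# Mimics the set-based ("in") selector semantics the e2e test relies on;
# this is NOT the real Kubernetes watch implementation.

def matches(labels, selector):
    """selector: dict of label key -> set of accepted values ('in' semantics)."""
    return all(labels.get(k) in accepted for k, accepted in selector.items())

WATCHERS = {
    "A":      {"watch-this-configmap": {"multiple-watchers-A"}},
    "B":      {"watch-this-configmap": {"multiple-watchers-B"}},
    "A-or-B": {"watch-this-configmap": {"multiple-watchers-A",
                                        "multiple-watchers-B"}},
}

def deliver(labels):
    """Return the names of watchers that would observe an event for these labels."""
    return sorted(name for name, sel in WATCHERS.items() if matches(labels, sel))

# Creating configmap A notifies watcher A and the A-or-B watcher,
# matching the duplicated ADDED lines in the log above.
print(deliver({"watch-this-configmap": "multiple-watchers-A"}))  # ['A', 'A-or-B']
print(deliver({"watch-this-configmap": "multiple-watchers-B"}))  # ['A-or-B', 'B']
```

Every event for configmap A reaches exactly two watchers, so the pairs of identical log lines are expected behavior, not duplication in the framework.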
Jun 7 13:36:42.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:36:42.581: INFO: namespace watch-748 deletion completed in 6.137619415s • [SLOW TEST:66.415 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:36:42.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 7 13:36:43.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3591' Jun 7 13:36:46.790: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 13:36:46.790: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 7 13:36:46.803: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 7 13:36:46.878: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 7 13:36:46.886: INFO: scanned /root for discovery docs: Jun 7 13:36:46.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3591' Jun 7 13:37:03.127: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 7 13:37:03.127: INFO: stdout: "Created e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb\nScaling up e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 7 13:37:03.127: INFO: stdout: "Created e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb\nScaling up e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 7 13:37:03.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3591' Jun 7 13:37:03.216: INFO: stderr: "" Jun 7 13:37:03.216: INFO: stdout: "e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 " Jun 7 13:37:03.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3591' Jun 7 13:37:03.382: INFO: stderr: "" Jun 7 13:37:03.382: INFO: stdout: "true" Jun 7 13:37:03.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3591' Jun 7 13:37:03.480: INFO: stderr: "" Jun 7 13:37:03.480: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 7 13:37:03.480: INFO: e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jun 7 13:37:03.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3591' Jun 7 13:37:03.620: INFO: stderr: "" Jun 7 13:37:03.621: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:37:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3591" for this suite. 
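The `-o template` queries in the rolling-update test above make two checks: first, that the container named `e2e-test-nginx-rc` reports a `running` state (printing `true`), and second, which image that container runs. A minimal Python sketch of the same two predicates over a pod expressed as a plain dict (an illustration of the Go-template logic, assuming the standard Pod object shape; not a real API client):

```python
# Sketch of the checks the e2e test performs via Go templates:
#   1) does the container named `name` report a "running" state?
#   2) which image does that container use?
# `pod` is a plain dict shaped like the Kubernetes Pod API object.

def container_running(pod, name):
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

def container_image(pod, name):
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("name") == name:
            return c.get("image")
    return None

# Example pod mirroring the one verified in the log (timestamps illustrative).
pod = {
    "spec": {"containers": [
        {"name": "e2e-test-nginx-rc",
         "image": "docker.io/library/nginx:1.14-alpine"},
    ]},
    "status": {"containerStatuses": [
        {"name": "e2e-test-nginx-rc",
         "state": {"running": {"startedAt": "2020-06-07T13:36:50Z"}}},
    ]},
}

print(container_running(pod, "e2e-test-nginx-rc"))  # True, like the "true" in the log
print(container_image(pod, "e2e-test-nginx-rc"))    # docker.io/library/nginx:1.14-alpine
```

The `exists` guards in the original templates correspond to the `.get(..., {})` / `.get(..., [])` defaults here: a pod with no `containerStatuses` yet simply yields no match rather than an error.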
Jun 7 13:37:27.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:37:27.957: INFO: namespace kubectl-3591 deletion completed in 24.284924498s • [SLOW TEST:45.376 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:37:27.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9899 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss 
in namespace statefulset-9899 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9899 Jun 7 13:37:28.142: INFO: Found 0 stateful pods, waiting for 1 Jun 7 13:37:38.149: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 7 13:37:38.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:37:38.719: INFO: stderr: "I0607 13:37:38.282983 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Create stream\nI0607 13:37:38.283036 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Stream added, broadcasting: 1\nI0607 13:37:38.285501 729 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0607 13:37:38.285552 729 log.go:172] (0xc000116dc0) (0xc0006e6000) Create stream\nI0607 13:37:38.285576 729 log.go:172] (0xc000116dc0) (0xc0006e6000) Stream added, broadcasting: 3\nI0607 13:37:38.286691 729 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0607 13:37:38.286738 729 log.go:172] (0xc000116dc0) (0xc0001fc8c0) Create stream\nI0607 13:37:38.286771 729 log.go:172] (0xc000116dc0) (0xc0001fc8c0) Stream added, broadcasting: 5\nI0607 13:37:38.287742 729 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0607 13:37:38.374092 729 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:37:38.374124 729 log.go:172] (0xc0001fc8c0) (5) Data frame handling\nI0607 13:37:38.374144 729 log.go:172] (0xc0001fc8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:37:38.708943 729 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 13:37:38.709001 729 log.go:172] (0xc0006e6000) (3) Data frame handling\nI0607 13:37:38.709028 729 log.go:172] (0xc0006e6000) (3) Data frame sent\nI0607 13:37:38.709047 729 log.go:172] (0xc000116dc0) Data frame received for 
3\nI0607 13:37:38.709064 729 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:37:38.709078 729 log.go:172] (0xc0001fc8c0) (5) Data frame handling\nI0607 13:37:38.709269 729 log.go:172] (0xc0006e6000) (3) Data frame handling\nI0607 13:37:38.711345 729 log.go:172] (0xc000116dc0) Data frame received for 1\nI0607 13:37:38.711366 729 log.go:172] (0xc0001fc820) (1) Data frame handling\nI0607 13:37:38.711384 729 log.go:172] (0xc0001fc820) (1) Data frame sent\nI0607 13:37:38.711398 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Stream removed, broadcasting: 1\nI0607 13:37:38.711509 729 log.go:172] (0xc000116dc0) Go away received\nI0607 13:37:38.711669 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Stream removed, broadcasting: 1\nI0607 13:37:38.711685 729 log.go:172] (0xc000116dc0) (0xc0006e6000) Stream removed, broadcasting: 3\nI0607 13:37:38.711694 729 log.go:172] (0xc000116dc0) (0xc0001fc8c0) Stream removed, broadcasting: 5\n" Jun 7 13:37:38.719: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:37:38.719: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:37:38.723: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 7 13:37:48.742: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:37:48.742: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:37:48.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999385s Jun 7 13:37:49.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.966490487s Jun 7 13:37:50.796: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.961905394s Jun 7 13:37:51.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.956968059s Jun 7 13:37:52.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951345836s Jun 7 
13:37:53.811: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.947498855s Jun 7 13:37:54.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.942674688s Jun 7 13:37:55.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.93828567s Jun 7 13:37:56.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.933892142s Jun 7 13:37:57.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 929.930262ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9899 Jun 7 13:37:58.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:37:59.594: INFO: stderr: "I0607 13:37:59.482521 751 log.go:172] (0xc000a12420) (0xc0007768c0) Create stream\nI0607 13:37:59.482588 751 log.go:172] (0xc000a12420) (0xc0007768c0) Stream added, broadcasting: 1\nI0607 13:37:59.486138 751 log.go:172] (0xc000a12420) Reply frame received for 1\nI0607 13:37:59.486172 751 log.go:172] (0xc000a12420) (0xc000612320) Create stream\nI0607 13:37:59.486182 751 log.go:172] (0xc000a12420) (0xc000612320) Stream added, broadcasting: 3\nI0607 13:37:59.487080 751 log.go:172] (0xc000a12420) Reply frame received for 3\nI0607 13:37:59.487131 751 log.go:172] (0xc000a12420) (0xc000776000) Create stream\nI0607 13:37:59.487155 751 log.go:172] (0xc000a12420) (0xc000776000) Stream added, broadcasting: 5\nI0607 13:37:59.488043 751 log.go:172] (0xc000a12420) Reply frame received for 5\nI0607 13:37:59.586799 751 log.go:172] (0xc000a12420) Data frame received for 3\nI0607 13:37:59.586842 751 log.go:172] (0xc000612320) (3) Data frame handling\nI0607 13:37:59.586855 751 log.go:172] (0xc000612320) (3) Data frame sent\nI0607 13:37:59.586868 751 log.go:172] (0xc000a12420) Data frame received for 3\nI0607 13:37:59.586882 751 log.go:172] (0xc000612320) (3) Data 
frame handling\nI0607 13:37:59.586920 751 log.go:172] (0xc000a12420) Data frame received for 5\nI0607 13:37:59.586945 751 log.go:172] (0xc000776000) (5) Data frame handling\nI0607 13:37:59.586971 751 log.go:172] (0xc000776000) (5) Data frame sent\nI0607 13:37:59.586988 751 log.go:172] (0xc000a12420) Data frame received for 5\nI0607 13:37:59.587015 751 log.go:172] (0xc000776000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:37:59.588373 751 log.go:172] (0xc000a12420) Data frame received for 1\nI0607 13:37:59.588418 751 log.go:172] (0xc0007768c0) (1) Data frame handling\nI0607 13:37:59.588448 751 log.go:172] (0xc0007768c0) (1) Data frame sent\nI0607 13:37:59.588656 751 log.go:172] (0xc000a12420) (0xc0007768c0) Stream removed, broadcasting: 1\nI0607 13:37:59.589052 751 log.go:172] (0xc000a12420) (0xc0007768c0) Stream removed, broadcasting: 1\nI0607 13:37:59.589075 751 log.go:172] (0xc000a12420) (0xc000612320) Stream removed, broadcasting: 3\nI0607 13:37:59.589084 751 log.go:172] (0xc000a12420) (0xc000776000) Stream removed, broadcasting: 5\n" Jun 7 13:37:59.594: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:37:59.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:37:59.598: INFO: Found 1 stateful pods, waiting for 3 Jun 7 13:38:09.602: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:38:09.602: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:38:09.602: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 7 13:38:19.604: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:38:19.604: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:38:19.604: INFO: Waiting for pod ss-2 to 
enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 7 13:38:19.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:38:19.830: INFO: stderr: "I0607 13:38:19.734258 770 log.go:172] (0xc0009280b0) (0xc00090a640) Create stream\nI0607 13:38:19.734309 770 log.go:172] (0xc0009280b0) (0xc00090a640) Stream added, broadcasting: 1\nI0607 13:38:19.736729 770 log.go:172] (0xc0009280b0) Reply frame received for 1\nI0607 13:38:19.736763 770 log.go:172] (0xc0009280b0) (0xc000998000) Create stream\nI0607 13:38:19.736773 770 log.go:172] (0xc0009280b0) (0xc000998000) Stream added, broadcasting: 3\nI0607 13:38:19.737771 770 log.go:172] (0xc0009280b0) Reply frame received for 3\nI0607 13:38:19.737794 770 log.go:172] (0xc0009280b0) (0xc000626280) Create stream\nI0607 13:38:19.737802 770 log.go:172] (0xc0009280b0) (0xc000626280) Stream added, broadcasting: 5\nI0607 13:38:19.738643 770 log.go:172] (0xc0009280b0) Reply frame received for 5\nI0607 13:38:19.821840 770 log.go:172] (0xc0009280b0) Data frame received for 3\nI0607 13:38:19.821874 770 log.go:172] (0xc000998000) (3) Data frame handling\nI0607 13:38:19.821886 770 log.go:172] (0xc000998000) (3) Data frame sent\nI0607 13:38:19.821909 770 log.go:172] (0xc0009280b0) Data frame received for 5\nI0607 13:38:19.821917 770 log.go:172] (0xc000626280) (5) Data frame handling\nI0607 13:38:19.821925 770 log.go:172] (0xc000626280) (5) Data frame sent\nI0607 13:38:19.821933 770 log.go:172] (0xc0009280b0) Data frame received for 5\nI0607 13:38:19.821939 770 log.go:172] (0xc000626280) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:38:19.822070 770 log.go:172] (0xc0009280b0) Data frame received for 3\nI0607 13:38:19.822093 770 log.go:172] 
(0xc000998000) (3) Data frame handling\nI0607 13:38:19.823654 770 log.go:172] (0xc0009280b0) Data frame received for 1\nI0607 13:38:19.823679 770 log.go:172] (0xc00090a640) (1) Data frame handling\nI0607 13:38:19.823694 770 log.go:172] (0xc00090a640) (1) Data frame sent\nI0607 13:38:19.823723 770 log.go:172] (0xc0009280b0) (0xc00090a640) Stream removed, broadcasting: 1\nI0607 13:38:19.823754 770 log.go:172] (0xc0009280b0) Go away received\nI0607 13:38:19.824069 770 log.go:172] (0xc0009280b0) (0xc00090a640) Stream removed, broadcasting: 1\nI0607 13:38:19.824085 770 log.go:172] (0xc0009280b0) (0xc000998000) Stream removed, broadcasting: 3\nI0607 13:38:19.824096 770 log.go:172] (0xc0009280b0) (0xc000626280) Stream removed, broadcasting: 5\n" Jun 7 13:38:19.831: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:38:19.831: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:38:19.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:38:20.121: INFO: stderr: "I0607 13:38:19.956386 789 log.go:172] (0xc00096c420) (0xc000596820) Create stream\nI0607 13:38:19.956459 789 log.go:172] (0xc00096c420) (0xc000596820) Stream added, broadcasting: 1\nI0607 13:38:19.961457 789 log.go:172] (0xc00096c420) Reply frame received for 1\nI0607 13:38:19.961492 789 log.go:172] (0xc00096c420) (0xc0003021e0) Create stream\nI0607 13:38:19.961502 789 log.go:172] (0xc00096c420) (0xc0003021e0) Stream added, broadcasting: 3\nI0607 13:38:19.962339 789 log.go:172] (0xc00096c420) Reply frame received for 3\nI0607 13:38:19.962369 789 log.go:172] (0xc00096c420) (0xc000596000) Create stream\nI0607 13:38:19.962379 789 log.go:172] (0xc00096c420) (0xc000596000) Stream added, broadcasting: 5\nI0607 13:38:19.963054 789 log.go:172] (0xc00096c420) Reply 
frame received for 5\nI0607 13:38:20.075796 789 log.go:172] (0xc00096c420) Data frame received for 5\nI0607 13:38:20.075836 789 log.go:172] (0xc000596000) (5) Data frame handling\nI0607 13:38:20.075856 789 log.go:172] (0xc000596000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:38:20.110989 789 log.go:172] (0xc00096c420) Data frame received for 3\nI0607 13:38:20.111008 789 log.go:172] (0xc0003021e0) (3) Data frame handling\nI0607 13:38:20.111025 789 log.go:172] (0xc0003021e0) (3) Data frame sent\nI0607 13:38:20.111221 789 log.go:172] (0xc00096c420) Data frame received for 3\nI0607 13:38:20.111244 789 log.go:172] (0xc0003021e0) (3) Data frame handling\nI0607 13:38:20.111497 789 log.go:172] (0xc00096c420) Data frame received for 5\nI0607 13:38:20.111517 789 log.go:172] (0xc000596000) (5) Data frame handling\nI0607 13:38:20.114107 789 log.go:172] (0xc00096c420) Data frame received for 1\nI0607 13:38:20.114128 789 log.go:172] (0xc000596820) (1) Data frame handling\nI0607 13:38:20.114138 789 log.go:172] (0xc000596820) (1) Data frame sent\nI0607 13:38:20.114150 789 log.go:172] (0xc00096c420) (0xc000596820) Stream removed, broadcasting: 1\nI0607 13:38:20.114200 789 log.go:172] (0xc00096c420) Go away received\nI0607 13:38:20.114446 789 log.go:172] (0xc00096c420) (0xc000596820) Stream removed, broadcasting: 1\nI0607 13:38:20.114462 789 log.go:172] (0xc00096c420) (0xc0003021e0) Stream removed, broadcasting: 3\nI0607 13:38:20.114471 789 log.go:172] (0xc00096c420) (0xc000596000) Stream removed, broadcasting: 5\n" Jun 7 13:38:20.121: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:38:20.121: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:38:20.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ 
|| true' Jun 7 13:38:20.355: INFO: stderr: "I0607 13:38:20.243735 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Create stream\nI0607 13:38:20.243791 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Stream added, broadcasting: 1\nI0607 13:38:20.247789 810 log.go:172] (0xc000a36210) Reply frame received for 1\nI0607 13:38:20.247835 810 log.go:172] (0xc000a36210) (0xc00088a000) Create stream\nI0607 13:38:20.247846 810 log.go:172] (0xc000a36210) (0xc00088a000) Stream added, broadcasting: 3\nI0607 13:38:20.248749 810 log.go:172] (0xc000a36210) Reply frame received for 3\nI0607 13:38:20.248776 810 log.go:172] (0xc000a36210) (0xc00088a0a0) Create stream\nI0607 13:38:20.248785 810 log.go:172] (0xc000a36210) (0xc00088a0a0) Stream added, broadcasting: 5\nI0607 13:38:20.249507 810 log.go:172] (0xc000a36210) Reply frame received for 5\nI0607 13:38:20.300813 810 log.go:172] (0xc000a36210) Data frame received for 5\nI0607 13:38:20.300840 810 log.go:172] (0xc00088a0a0) (5) Data frame handling\nI0607 13:38:20.300859 810 log.go:172] (0xc00088a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:38:20.346664 810 log.go:172] (0xc000a36210) Data frame received for 3\nI0607 13:38:20.346710 810 log.go:172] (0xc00088a000) (3) Data frame handling\nI0607 13:38:20.346814 810 log.go:172] (0xc00088a000) (3) Data frame sent\nI0607 13:38:20.347180 810 log.go:172] (0xc000a36210) Data frame received for 5\nI0607 13:38:20.347208 810 log.go:172] (0xc00088a0a0) (5) Data frame handling\nI0607 13:38:20.347227 810 log.go:172] (0xc000a36210) Data frame received for 3\nI0607 13:38:20.347234 810 log.go:172] (0xc00088a000) (3) Data frame handling\nI0607 13:38:20.349507 810 log.go:172] (0xc000a36210) Data frame received for 1\nI0607 13:38:20.349528 810 log.go:172] (0xc0005fc3c0) (1) Data frame handling\nI0607 13:38:20.349553 810 log.go:172] (0xc0005fc3c0) (1) Data frame sent\nI0607 13:38:20.349574 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Stream removed, broadcasting: 1\nI0607 
13:38:20.349604 810 log.go:172] (0xc000a36210) Go away received\nI0607 13:38:20.349911 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Stream removed, broadcasting: 1\nI0607 13:38:20.349924 810 log.go:172] (0xc000a36210) (0xc00088a000) Stream removed, broadcasting: 3\nI0607 13:38:20.349930 810 log.go:172] (0xc000a36210) (0xc00088a0a0) Stream removed, broadcasting: 5\n" Jun 7 13:38:20.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:38:20.356: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:38:20.356: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:38:20.359: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 7 13:38:30.367: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:38:30.367: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:38:30.367: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:38:30.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999447s Jun 7 13:38:31.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.953371378s Jun 7 13:38:32.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.948132706s Jun 7 13:38:33.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.943597355s Jun 7 13:38:34.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938067997s Jun 7 13:38:35.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.932928584s Jun 7 13:38:36.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.929361498s Jun 7 13:38:37.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923926385s Jun 7 13:38:38.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.919586309s 
Jun 7 13:38:39.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 915.673182ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9899 Jun 7 13:38:40.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:38:40.702: INFO: stderr: "I0607 13:38:40.599845 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Create stream\nI0607 13:38:40.599911 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Stream added, broadcasting: 1\nI0607 13:38:40.602031 831 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0607 13:38:40.602086 831 log.go:172] (0xc000116dc0) (0xc0008c0000) Create stream\nI0607 13:38:40.602111 831 log.go:172] (0xc000116dc0) (0xc0008c0000) Stream added, broadcasting: 3\nI0607 13:38:40.602966 831 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0607 13:38:40.603009 831 log.go:172] (0xc000116dc0) (0xc00064c960) Create stream\nI0607 13:38:40.603024 831 log.go:172] (0xc000116dc0) (0xc00064c960) Stream added, broadcasting: 5\nI0607 13:38:40.603985 831 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0607 13:38:40.690467 831 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:38:40.690502 831 log.go:172] (0xc00064c960) (5) Data frame handling\nI0607 13:38:40.690526 831 log.go:172] (0xc00064c960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:38:40.694543 831 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 13:38:40.694577 831 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0607 13:38:40.694592 831 log.go:172] (0xc0008c0000) (3) Data frame sent\nI0607 13:38:40.694648 831 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:38:40.694790 831 log.go:172] (0xc00064c960) (5) Data frame handling\nI0607 13:38:40.694845 831 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 
13:38:40.694872 831 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0607 13:38:40.696257 831 log.go:172] (0xc000116dc0) Data frame received for 1\nI0607 13:38:40.696276 831 log.go:172] (0xc00064c8c0) (1) Data frame handling\nI0607 13:38:40.696293 831 log.go:172] (0xc00064c8c0) (1) Data frame sent\nI0607 13:38:40.696378 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Stream removed, broadcasting: 1\nI0607 13:38:40.696491 831 log.go:172] (0xc000116dc0) Go away received\nI0607 13:38:40.696598 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Stream removed, broadcasting: 1\nI0607 13:38:40.696610 831 log.go:172] (0xc000116dc0) (0xc0008c0000) Stream removed, broadcasting: 3\nI0607 13:38:40.696616 831 log.go:172] (0xc000116dc0) (0xc00064c960) Stream removed, broadcasting: 5\n" Jun 7 13:38:40.702: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:38:40.702: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:38:40.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:38:40.967: INFO: stderr: "I0607 13:38:40.886359 852 log.go:172] (0xc000448790) (0xc0007f2a00) Create stream\nI0607 13:38:40.886435 852 log.go:172] (0xc000448790) (0xc0007f2a00) Stream added, broadcasting: 1\nI0607 13:38:40.890778 852 log.go:172] (0xc000448790) Reply frame received for 1\nI0607 13:38:40.890841 852 log.go:172] (0xc000448790) (0xc0007f2000) Create stream\nI0607 13:38:40.890856 852 log.go:172] (0xc000448790) (0xc0007f2000) Stream added, broadcasting: 3\nI0607 13:38:40.891725 852 log.go:172] (0xc000448790) Reply frame received for 3\nI0607 13:38:40.891767 852 log.go:172] (0xc000448790) (0xc0007f20a0) Create stream\nI0607 13:38:40.891778 852 log.go:172] (0xc000448790) (0xc0007f20a0) Stream added, broadcasting: 5\nI0607 13:38:40.892732 852 
log.go:172] (0xc000448790) Reply frame received for 5\nI0607 13:38:40.958850 852 log.go:172] (0xc000448790) Data frame received for 3\nI0607 13:38:40.958878 852 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0607 13:38:40.958899 852 log.go:172] (0xc0007f2000) (3) Data frame sent\nI0607 13:38:40.958905 852 log.go:172] (0xc000448790) Data frame received for 3\nI0607 13:38:40.958909 852 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0607 13:38:40.959016 852 log.go:172] (0xc000448790) Data frame received for 5\nI0607 13:38:40.959056 852 log.go:172] (0xc0007f20a0) (5) Data frame handling\nI0607 13:38:40.959082 852 log.go:172] (0xc0007f20a0) (5) Data frame sent\nI0607 13:38:40.959096 852 log.go:172] (0xc000448790) Data frame received for 5\nI0607 13:38:40.959110 852 log.go:172] (0xc0007f20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:38:40.960540 852 log.go:172] (0xc000448790) Data frame received for 1\nI0607 13:38:40.960554 852 log.go:172] (0xc0007f2a00) (1) Data frame handling\nI0607 13:38:40.960566 852 log.go:172] (0xc0007f2a00) (1) Data frame sent\nI0607 13:38:40.960575 852 log.go:172] (0xc000448790) (0xc0007f2a00) Stream removed, broadcasting: 1\nI0607 13:38:40.960585 852 log.go:172] (0xc000448790) Go away received\nI0607 13:38:40.960910 852 log.go:172] (0xc000448790) (0xc0007f2a00) Stream removed, broadcasting: 1\nI0607 13:38:40.960922 852 log.go:172] (0xc000448790) (0xc0007f2000) Stream removed, broadcasting: 3\nI0607 13:38:40.960927 852 log.go:172] (0xc000448790) (0xc0007f20a0) Stream removed, broadcasting: 5\n" Jun 7 13:38:40.967: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:38:40.967: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:38:40.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:38:41.204: INFO: stderr: "I0607 13:38:41.125832 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Create stream\nI0607 13:38:41.125902 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Stream added, broadcasting: 1\nI0607 13:38:41.129404 873 log.go:172] (0xc000a7a210) Reply frame received for 1\nI0607 13:38:41.129459 873 log.go:172] (0xc000a7a210) (0xc0006bc0a0) Create stream\nI0607 13:38:41.129474 873 log.go:172] (0xc000a7a210) (0xc0006bc0a0) Stream added, broadcasting: 3\nI0607 13:38:41.130479 873 log.go:172] (0xc000a7a210) Reply frame received for 3\nI0607 13:38:41.130512 873 log.go:172] (0xc000a7a210) (0xc0006de280) Create stream\nI0607 13:38:41.130521 873 log.go:172] (0xc000a7a210) (0xc0006de280) Stream added, broadcasting: 5\nI0607 13:38:41.131468 873 log.go:172] (0xc000a7a210) Reply frame received for 5\nI0607 13:38:41.198557 873 log.go:172] (0xc000a7a210) Data frame received for 3\nI0607 13:38:41.198588 873 log.go:172] (0xc0006bc0a0) (3) Data frame handling\nI0607 13:38:41.198606 873 log.go:172] (0xc0006bc0a0) (3) Data frame sent\nI0607 13:38:41.198614 873 log.go:172] (0xc000a7a210) Data frame received for 3\nI0607 13:38:41.198622 873 log.go:172] (0xc0006bc0a0) (3) Data frame handling\nI0607 13:38:41.198651 873 log.go:172] (0xc000a7a210) Data frame received for 5\nI0607 13:38:41.198658 873 log.go:172] (0xc0006de280) (5) Data frame handling\nI0607 13:38:41.198670 873 log.go:172] (0xc0006de280) (5) Data frame sent\nI0607 13:38:41.198677 873 log.go:172] (0xc000a7a210) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:38:41.198682 873 log.go:172] (0xc0006de280) (5) Data frame handling\nI0607 13:38:41.200041 873 log.go:172] (0xc000a7a210) Data frame received for 1\nI0607 13:38:41.200059 873 log.go:172] (0xc0006de1e0) (1) Data frame handling\nI0607 13:38:41.200071 873 log.go:172] (0xc0006de1e0) (1) Data frame sent\nI0607 13:38:41.200082 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) 
Stream removed, broadcasting: 1\nI0607 13:38:41.200119 873 log.go:172] (0xc000a7a210) Go away received\nI0607 13:38:41.200356 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Stream removed, broadcasting: 1\nI0607 13:38:41.200371 873 log.go:172] (0xc000a7a210) (0xc0006bc0a0) Stream removed, broadcasting: 3\nI0607 13:38:41.200377 873 log.go:172] (0xc000a7a210) (0xc0006de280) Stream removed, broadcasting: 5\n" Jun 7 13:38:41.205: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:38:41.205: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:38:41.205: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 7 13:39:11.222: INFO: Deleting all statefulset in ns statefulset-9899 Jun 7 13:39:11.224: INFO: Scaling statefulset ss to 0 Jun 7 13:39:11.232: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:39:11.234: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:39:11.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9899" for this suite. 
Jun 7 13:39:19.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:39:19.621: INFO: namespace statefulset-9899 deletion completed in 8.265820174s • [SLOW TEST:111.664 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:39:19.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-73bb5d80-9328-492b-ac8d-18121c74edf1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-73bb5d80-9328-492b-ac8d-18121c74edf1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:40:43.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8661" 
for this suite. Jun 7 13:41:05.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:41:05.951: INFO: namespace configmap-8661 deletion completed in 22.13203767s • [SLOW TEST:106.329 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:41:05.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 7 13:41:06.765: INFO: Waiting up to 5m0s for pod "pod-785ece4d-6601-402e-a19d-a245b14033bd" in namespace "emptydir-2938" to be "success or failure" Jun 7 13:41:06.819: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Pending", Reason="", readiness=false. Elapsed: 53.896015ms Jun 7 13:41:08.824: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.058758851s Jun 7 13:41:11.404: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.63906955s Jun 7 13:41:13.408: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.643599288s STEP: Saw pod success Jun 7 13:41:13.408: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd" satisfied condition "success or failure" Jun 7 13:41:13.412: INFO: Trying to get logs from node iruya-worker2 pod pod-785ece4d-6601-402e-a19d-a245b14033bd container test-container: STEP: delete the pod Jun 7 13:41:13.563: INFO: Waiting for pod pod-785ece4d-6601-402e-a19d-a245b14033bd to disappear Jun 7 13:41:13.596: INFO: Pod pod-785ece4d-6601-402e-a19d-a245b14033bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:41:13.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2938" for this suite. 
Jun 7 13:41:19.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:41:19.856: INFO: namespace emptydir-2938 deletion completed in 6.256105574s • [SLOW TEST:13.904 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:41:19.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 7 13:41:19.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53" in namespace "projected-1917" to be "success or failure" Jun 7 13:41:20.022: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.596671ms Jun 7 13:41:22.026: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040615354s Jun 7 13:41:24.122: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137051837s Jun 7 13:41:26.126: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140993254s Jun 7 13:41:28.140: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154894539s STEP: Saw pod success Jun 7 13:41:28.140: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53" satisfied condition "success or failure" Jun 7 13:41:28.143: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-917986f4-a629-4002-996f-785958d30f53 container client-container: STEP: delete the pod Jun 7 13:41:28.204: INFO: Waiting for pod downwardapi-volume-917986f4-a629-4002-996f-785958d30f53 to disappear Jun 7 13:41:28.314: INFO: Pod downwardapi-volume-917986f4-a629-4002-996f-785958d30f53 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:41:28.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1917" for this suite. 
Jun 7 13:41:34.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:41:34.440: INFO: namespace projected-1917 deletion completed in 6.121328893s • [SLOW TEST:14.584 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:41:34.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:41:40.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9174" for this suite. 
Jun 7 13:41:47.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:41:47.083: INFO: namespace emptydir-wrapper-9174 deletion completed in 6.207188114s • [SLOW TEST:12.643 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:41:47.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-49t2 STEP: Creating a pod to test atomic-volume-subpath Jun 7 13:41:47.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-49t2" in namespace "subpath-2429" to be "success or failure" Jun 7 13:41:47.936: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.152666ms Jun 7 13:41:50.009: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.102627131s Jun 7 13:41:52.063: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156890923s Jun 7 13:41:54.068: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161124929s Jun 7 13:41:56.072: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 8.16578035s Jun 7 13:41:58.076: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 10.169446504s Jun 7 13:42:00.079: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 12.173092995s Jun 7 13:42:02.084: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 14.177539951s Jun 7 13:42:04.088: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 16.181778182s Jun 7 13:42:06.249: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 18.342141324s Jun 7 13:42:08.253: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 20.346704032s Jun 7 13:42:10.258: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 22.351126437s Jun 7 13:42:12.262: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 24.355604628s Jun 7 13:42:14.267: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 26.360688204s Jun 7 13:42:16.271: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 28.365058531s Jun 7 13:42:18.275: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.368963545s STEP: Saw pod success Jun 7 13:42:18.275: INFO: Pod "pod-subpath-test-projected-49t2" satisfied condition "success or failure" Jun 7 13:42:18.278: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-49t2 container test-container-subpath-projected-49t2: STEP: delete the pod Jun 7 13:42:18.305: INFO: Waiting for pod pod-subpath-test-projected-49t2 to disappear Jun 7 13:42:18.322: INFO: Pod pod-subpath-test-projected-49t2 no longer exists STEP: Deleting pod pod-subpath-test-projected-49t2 Jun 7 13:42:18.322: INFO: Deleting pod "pod-subpath-test-projected-49t2" in namespace "subpath-2429" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:42:18.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2429" for this suite. Jun 7 13:42:24.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:42:24.435: INFO: namespace subpath-2429 deletion completed in 6.107916649s • [SLOW TEST:37.352 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:42:24.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2784 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 7 13:42:24.641: INFO: Found 0 stateful pods, waiting for 3 Jun 7 13:42:34.645: INFO: Found 2 stateful pods, waiting for 3 Jun 7 13:42:44.647: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:42:44.647: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:42:44.647: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 7 13:42:44.674: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 7 13:42:54.726: INFO: Updating stateful set ss2 Jun 7 13:42:54.839: INFO: Waiting for Pod statefulset-2784/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 7 13:43:05.786: INFO: Found 2 stateful 
pods, waiting for 3 Jun 7 13:43:15.828: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:43:15.828: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:43:15.828: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 7 13:43:15.853: INFO: Updating stateful set ss2 Jun 7 13:43:16.020: INFO: Waiting for Pod statefulset-2784/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:43:26.047: INFO: Updating stateful set ss2 Jun 7 13:43:26.173: INFO: Waiting for StatefulSet statefulset-2784/ss2 to complete update Jun 7 13:43:26.173: INFO: Waiting for Pod statefulset-2784/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:43:36.180: INFO: Waiting for StatefulSet statefulset-2784/ss2 to complete update Jun 7 13:43:36.180: INFO: Waiting for Pod statefulset-2784/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:43:46.180: INFO: Waiting for StatefulSet statefulset-2784/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 7 13:43:56.180: INFO: Deleting all statefulset in ns statefulset-2784 Jun 7 13:43:56.183: INFO: Scaling statefulset ss2 to 0 Jun 7 13:44:26.229: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:44:26.232: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:44:26.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2784" for this suite. 
Jun 7 13:44:34.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:44:34.407: INFO: namespace statefulset-2784 deletion completed in 8.130509464s • [SLOW TEST:129.971 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:44:34.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4754 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-4754 STEP: 
Waiting until all stateful set ss replicas will be running in namespace statefulset-4754 Jun 7 13:44:34.642: INFO: Found 0 stateful pods, waiting for 1 Jun 7 13:44:44.647: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 7 13:44:44.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:44:44.905: INFO: stderr: "I0607 13:44:44.776846 893 log.go:172] (0xc0009d2420) (0xc00090a820) Create stream\nI0607 13:44:44.776921 893 log.go:172] (0xc0009d2420) (0xc00090a820) Stream added, broadcasting: 1\nI0607 13:44:44.778936 893 log.go:172] (0xc0009d2420) Reply frame received for 1\nI0607 13:44:44.778973 893 log.go:172] (0xc0009d2420) (0xc00090a8c0) Create stream\nI0607 13:44:44.778980 893 log.go:172] (0xc0009d2420) (0xc00090a8c0) Stream added, broadcasting: 3\nI0607 13:44:44.779827 893 log.go:172] (0xc0009d2420) Reply frame received for 3\nI0607 13:44:44.779866 893 log.go:172] (0xc0009d2420) (0xc0005ba460) Create stream\nI0607 13:44:44.779881 893 log.go:172] (0xc0009d2420) (0xc0005ba460) Stream added, broadcasting: 5\nI0607 13:44:44.780547 893 log.go:172] (0xc0009d2420) Reply frame received for 5\nI0607 13:44:44.845382 893 log.go:172] (0xc0009d2420) Data frame received for 5\nI0607 13:44:44.845412 893 log.go:172] (0xc0005ba460) (5) Data frame handling\nI0607 13:44:44.845432 893 log.go:172] (0xc0005ba460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:44:44.896151 893 log.go:172] (0xc0009d2420) Data frame received for 5\nI0607 13:44:44.896197 893 log.go:172] (0xc0005ba460) (5) Data frame handling\nI0607 13:44:44.896230 893 log.go:172] (0xc0009d2420) Data frame received for 3\nI0607 13:44:44.896262 893 log.go:172] (0xc00090a8c0) (3) Data frame handling\nI0607 13:44:44.896293 893 
log.go:172] (0xc00090a8c0) (3) Data frame sent\nI0607 13:44:44.896311 893 log.go:172] (0xc0009d2420) Data frame received for 3\nI0607 13:44:44.896324 893 log.go:172] (0xc00090a8c0) (3) Data frame handling\nI0607 13:44:44.898210 893 log.go:172] (0xc0009d2420) Data frame received for 1\nI0607 13:44:44.898236 893 log.go:172] (0xc00090a820) (1) Data frame handling\nI0607 13:44:44.898258 893 log.go:172] (0xc00090a820) (1) Data frame sent\nI0607 13:44:44.898275 893 log.go:172] (0xc0009d2420) (0xc00090a820) Stream removed, broadcasting: 1\nI0607 13:44:44.898290 893 log.go:172] (0xc0009d2420) Go away received\nI0607 13:44:44.898635 893 log.go:172] (0xc0009d2420) (0xc00090a820) Stream removed, broadcasting: 1\nI0607 13:44:44.898659 893 log.go:172] (0xc0009d2420) (0xc00090a8c0) Stream removed, broadcasting: 3\nI0607 13:44:44.898671 893 log.go:172] (0xc0009d2420) (0xc0005ba460) Stream removed, broadcasting: 5\n" Jun 7 13:44:44.905: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:44:44.905: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:44:44.909: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 7 13:44:54.913: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:44:54.913: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:44:54.949: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:44:54.949: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 
+0000 UTC }] Jun 7 13:44:54.949: INFO: Jun 7 13:44:54.949: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 7 13:44:56.521: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.971948891s Jun 7 13:44:57.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.399548278s Jun 7 13:44:58.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.394731839s Jun 7 13:44:59.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.389624509s Jun 7 13:45:00.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.34253657s Jun 7 13:45:01.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.331038032s Jun 7 13:45:02.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.295573123s Jun 7 13:45:03.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.291074837s Jun 7 13:45:04.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 286.806801ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4754 Jun 7 13:45:05.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:45:05.889: INFO: stderr: "I0607 13:45:05.801515 915 log.go:172] (0xc0008d8420) (0xc000360820) Create stream\nI0607 13:45:05.801573 915 log.go:172] (0xc0008d8420) (0xc000360820) Stream added, broadcasting: 1\nI0607 13:45:05.803424 915 log.go:172] (0xc0008d8420) Reply frame received for 1\nI0607 13:45:05.803474 915 log.go:172] (0xc0008d8420) (0xc000960000) Create stream\nI0607 13:45:05.803489 915 log.go:172] (0xc0008d8420) (0xc000960000) Stream added, broadcasting: 3\nI0607 13:45:05.804211 915 log.go:172] (0xc0008d8420) Reply frame received for 3\nI0607 13:45:05.804246 915 log.go:172] (0xc0008d8420) (0xc000784000) Create stream\nI0607 13:45:05.804256 915 log.go:172] 
(0xc0008d8420) (0xc000784000) Stream added, broadcasting: 5\nI0607 13:45:05.805275 915 log.go:172] (0xc0008d8420) Reply frame received for 5\nI0607 13:45:05.877085 915 log.go:172] (0xc0008d8420) Data frame received for 5\nI0607 13:45:05.877286 915 log.go:172] (0xc000784000) (5) Data frame handling\nI0607 13:45:05.877308 915 log.go:172] (0xc000784000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:45:05.879969 915 log.go:172] (0xc0008d8420) Data frame received for 3\nI0607 13:45:05.879998 915 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 13:45:05.880015 915 log.go:172] (0xc000960000) (3) Data frame sent\nI0607 13:45:05.880181 915 log.go:172] (0xc0008d8420) Data frame received for 3\nI0607 13:45:05.880198 915 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 13:45:05.880412 915 log.go:172] (0xc0008d8420) Data frame received for 5\nI0607 13:45:05.880440 915 log.go:172] (0xc000784000) (5) Data frame handling\nI0607 13:45:05.881751 915 log.go:172] (0xc0008d8420) Data frame received for 1\nI0607 13:45:05.881766 915 log.go:172] (0xc000360820) (1) Data frame handling\nI0607 13:45:05.881785 915 log.go:172] (0xc000360820) (1) Data frame sent\nI0607 13:45:05.881802 915 log.go:172] (0xc0008d8420) (0xc000360820) Stream removed, broadcasting: 1\nI0607 13:45:05.881817 915 log.go:172] (0xc0008d8420) Go away received\nI0607 13:45:05.882643 915 log.go:172] (0xc0008d8420) (0xc000360820) Stream removed, broadcasting: 1\nI0607 13:45:05.882699 915 log.go:172] (0xc0008d8420) (0xc000960000) Stream removed, broadcasting: 3\nI0607 13:45:05.882721 915 log.go:172] (0xc0008d8420) (0xc000784000) Stream removed, broadcasting: 5\n" Jun 7 13:45:05.889: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:45:05.889: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:45:05.889: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:45:06.159: INFO: stderr: "I0607 13:45:06.093550 939 log.go:172] (0xc000842370) (0xc0001f8960) Create stream\nI0607 13:45:06.093639 939 log.go:172] (0xc000842370) (0xc0001f8960) Stream added, broadcasting: 1\nI0607 13:45:06.095515 939 log.go:172] (0xc000842370) Reply frame received for 1\nI0607 13:45:06.095560 939 log.go:172] (0xc000842370) (0xc0006e0000) Create stream\nI0607 13:45:06.095593 939 log.go:172] (0xc000842370) (0xc0006e0000) Stream added, broadcasting: 3\nI0607 13:45:06.096496 939 log.go:172] (0xc000842370) Reply frame received for 3\nI0607 13:45:06.096531 939 log.go:172] (0xc000842370) (0xc0001f8a00) Create stream\nI0607 13:45:06.096543 939 log.go:172] (0xc000842370) (0xc0001f8a00) Stream added, broadcasting: 5\nI0607 13:45:06.097465 939 log.go:172] (0xc000842370) Reply frame received for 5\nI0607 13:45:06.152326 939 log.go:172] (0xc000842370) Data frame received for 3\nI0607 13:45:06.152359 939 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0607 13:45:06.152369 939 log.go:172] (0xc0006e0000) (3) Data frame sent\nI0607 13:45:06.152378 939 log.go:172] (0xc000842370) Data frame received for 3\nI0607 13:45:06.152389 939 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0607 13:45:06.152411 939 log.go:172] (0xc000842370) Data frame received for 5\nI0607 13:45:06.152421 939 log.go:172] (0xc0001f8a00) (5) Data frame handling\nI0607 13:45:06.152432 939 log.go:172] (0xc0001f8a00) (5) Data frame sent\nI0607 13:45:06.152453 939 log.go:172] (0xc000842370) Data frame received for 5\nI0607 13:45:06.152462 939 log.go:172] (0xc0001f8a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0607 13:45:06.153670 939 log.go:172] (0xc000842370) Data frame received for 1\nI0607 13:45:06.153690 939 log.go:172] (0xc0001f8960) 
(1) Data frame handling\nI0607 13:45:06.153719 939 log.go:172] (0xc0001f8960) (1) Data frame sent\nI0607 13:45:06.153787 939 log.go:172] (0xc000842370) (0xc0001f8960) Stream removed, broadcasting: 1\nI0607 13:45:06.153814 939 log.go:172] (0xc000842370) Go away received\nI0607 13:45:06.154347 939 log.go:172] (0xc000842370) (0xc0001f8960) Stream removed, broadcasting: 1\nI0607 13:45:06.154376 939 log.go:172] (0xc000842370) (0xc0006e0000) Stream removed, broadcasting: 3\nI0607 13:45:06.154389 939 log.go:172] (0xc000842370) (0xc0001f8a00) Stream removed, broadcasting: 5\n" Jun 7 13:45:06.159: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:45:06.159: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:45:06.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:45:06.346: INFO: stderr: "I0607 13:45:06.276949 954 log.go:172] (0xc0009ca420) (0xc00010c820) Create stream\nI0607 13:45:06.276999 954 log.go:172] (0xc0009ca420) (0xc00010c820) Stream added, broadcasting: 1\nI0607 13:45:06.279363 954 log.go:172] (0xc0009ca420) Reply frame received for 1\nI0607 13:45:06.279420 954 log.go:172] (0xc0009ca420) (0xc0007e4000) Create stream\nI0607 13:45:06.279445 954 log.go:172] (0xc0009ca420) (0xc0007e4000) Stream added, broadcasting: 3\nI0607 13:45:06.280564 954 log.go:172] (0xc0009ca420) Reply frame received for 3\nI0607 13:45:06.280609 954 log.go:172] (0xc0009ca420) (0xc00010c8c0) Create stream\nI0607 13:45:06.280627 954 log.go:172] (0xc0009ca420) (0xc00010c8c0) Stream added, broadcasting: 5\nI0607 13:45:06.281764 954 log.go:172] (0xc0009ca420) Reply frame received for 5\nI0607 13:45:06.337780 954 log.go:172] (0xc0009ca420) Data frame received for 3\nI0607 13:45:06.337841 954 log.go:172] (0xc0007e4000) (3) Data frame 
handling\nI0607 13:45:06.337866 954 log.go:172] (0xc0007e4000) (3) Data frame sent\nI0607 13:45:06.337896 954 log.go:172] (0xc0009ca420) Data frame received for 5\nI0607 13:45:06.337917 954 log.go:172] (0xc00010c8c0) (5) Data frame handling\nI0607 13:45:06.337931 954 log.go:172] (0xc00010c8c0) (5) Data frame sent\nI0607 13:45:06.337943 954 log.go:172] (0xc0009ca420) Data frame received for 5\nI0607 13:45:06.337950 954 log.go:172] (0xc00010c8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0607 13:45:06.337979 954 log.go:172] (0xc0009ca420) Data frame received for 3\nI0607 13:45:06.337990 954 log.go:172] (0xc0007e4000) (3) Data frame handling\nI0607 13:45:06.339897 954 log.go:172] (0xc0009ca420) Data frame received for 1\nI0607 13:45:06.339930 954 log.go:172] (0xc00010c820) (1) Data frame handling\nI0607 13:45:06.339955 954 log.go:172] (0xc00010c820) (1) Data frame sent\nI0607 13:45:06.339974 954 log.go:172] (0xc0009ca420) (0xc00010c820) Stream removed, broadcasting: 1\nI0607 13:45:06.339995 954 log.go:172] (0xc0009ca420) Go away received\nI0607 13:45:06.340326 954 log.go:172] (0xc0009ca420) (0xc00010c820) Stream removed, broadcasting: 1\nI0607 13:45:06.340352 954 log.go:172] (0xc0009ca420) (0xc0007e4000) Stream removed, broadcasting: 3\nI0607 13:45:06.340359 954 log.go:172] (0xc0009ca420) (0xc00010c8c0) Stream removed, broadcasting: 5\n" Jun 7 13:45:06.346: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:45:06.346: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:45:06.349: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:45:06.349: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:45:06.349: INFO: Waiting for pod ss-2 to enter Running - 
Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 7 13:45:06.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:45:06.558: INFO: stderr: "I0607 13:45:06.476816 976 log.go:172] (0xc000aea210) (0xc000ae4140) Create stream\nI0607 13:45:06.476892 976 log.go:172] (0xc000aea210) (0xc000ae4140) Stream added, broadcasting: 1\nI0607 13:45:06.489776 976 log.go:172] (0xc000aea210) Reply frame received for 1\nI0607 13:45:06.489828 976 log.go:172] (0xc000aea210) (0xc00040a280) Create stream\nI0607 13:45:06.489842 976 log.go:172] (0xc000aea210) (0xc00040a280) Stream added, broadcasting: 3\nI0607 13:45:06.492555 976 log.go:172] (0xc000aea210) Reply frame received for 3\nI0607 13:45:06.492580 976 log.go:172] (0xc000aea210) (0xc000ae4280) Create stream\nI0607 13:45:06.492589 976 log.go:172] (0xc000aea210) (0xc000ae4280) Stream added, broadcasting: 5\nI0607 13:45:06.494270 976 log.go:172] (0xc000aea210) Reply frame received for 5\nI0607 13:45:06.548994 976 log.go:172] (0xc000aea210) Data frame received for 5\nI0607 13:45:06.549018 976 log.go:172] (0xc000ae4280) (5) Data frame handling\nI0607 13:45:06.549026 976 log.go:172] (0xc000ae4280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:45:06.549038 976 log.go:172] (0xc000aea210) Data frame received for 3\nI0607 13:45:06.549047 976 log.go:172] (0xc00040a280) (3) Data frame handling\nI0607 13:45:06.549056 976 log.go:172] (0xc00040a280) (3) Data frame sent\nI0607 13:45:06.549064 976 log.go:172] (0xc000aea210) Data frame received for 3\nI0607 13:45:06.549072 976 log.go:172] (0xc00040a280) (3) Data frame handling\nI0607 13:45:06.549569 976 log.go:172] (0xc000aea210) Data frame received for 5\nI0607 13:45:06.549606 976 log.go:172] (0xc000ae4280) (5) Data frame handling\nI0607 13:45:06.550974 976 log.go:172] 
(0xc000aea210) Data frame received for 1\nI0607 13:45:06.550995 976 log.go:172] (0xc000ae4140) (1) Data frame handling\nI0607 13:45:06.551003 976 log.go:172] (0xc000ae4140) (1) Data frame sent\nI0607 13:45:06.551015 976 log.go:172] (0xc000aea210) (0xc000ae4140) Stream removed, broadcasting: 1\nI0607 13:45:06.551025 976 log.go:172] (0xc000aea210) Go away received\nI0607 13:45:06.551314 976 log.go:172] (0xc000aea210) (0xc000ae4140) Stream removed, broadcasting: 1\nI0607 13:45:06.551327 976 log.go:172] (0xc000aea210) (0xc00040a280) Stream removed, broadcasting: 3\nI0607 13:45:06.551335 976 log.go:172] (0xc000aea210) (0xc000ae4280) Stream removed, broadcasting: 5\n" Jun 7 13:45:06.558: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:45:06.558: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:45:06.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:45:06.775: INFO: stderr: "I0607 13:45:06.678043 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Create stream\nI0607 13:45:06.678105 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Stream added, broadcasting: 1\nI0607 13:45:06.680858 996 log.go:172] (0xc0008c6420) Reply frame received for 1\nI0607 13:45:06.680917 996 log.go:172] (0xc0008c6420) (0xc000518780) Create stream\nI0607 13:45:06.680940 996 log.go:172] (0xc0008c6420) (0xc000518780) Stream added, broadcasting: 3\nI0607 13:45:06.682051 996 log.go:172] (0xc0008c6420) Reply frame received for 3\nI0607 13:45:06.682082 996 log.go:172] (0xc0008c6420) (0xc000832000) Create stream\nI0607 13:45:06.682094 996 log.go:172] (0xc0008c6420) (0xc000832000) Stream added, broadcasting: 5\nI0607 13:45:06.682780 996 log.go:172] (0xc0008c6420) Reply frame received for 5\nI0607 13:45:06.738830 996 log.go:172] (0xc0008c6420) 
Data frame received for 5\nI0607 13:45:06.738858 996 log.go:172] (0xc000832000) (5) Data frame handling\nI0607 13:45:06.738993 996 log.go:172] (0xc000832000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:45:06.765502 996 log.go:172] (0xc0008c6420) Data frame received for 3\nI0607 13:45:06.765550 996 log.go:172] (0xc000518780) (3) Data frame handling\nI0607 13:45:06.765585 996 log.go:172] (0xc000518780) (3) Data frame sent\nI0607 13:45:06.765604 996 log.go:172] (0xc0008c6420) Data frame received for 3\nI0607 13:45:06.765636 996 log.go:172] (0xc000518780) (3) Data frame handling\nI0607 13:45:06.765817 996 log.go:172] (0xc0008c6420) Data frame received for 5\nI0607 13:45:06.765835 996 log.go:172] (0xc000832000) (5) Data frame handling\nI0607 13:45:06.767600 996 log.go:172] (0xc0008c6420) Data frame received for 1\nI0607 13:45:06.768040 996 log.go:172] (0xc0005186e0) (1) Data frame handling\nI0607 13:45:06.768097 996 log.go:172] (0xc0005186e0) (1) Data frame sent\nI0607 13:45:06.768133 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Stream removed, broadcasting: 1\nI0607 13:45:06.768423 996 log.go:172] (0xc0008c6420) Go away received\nI0607 13:45:06.768714 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Stream removed, broadcasting: 1\nI0607 13:45:06.768746 996 log.go:172] (0xc0008c6420) (0xc000518780) Stream removed, broadcasting: 3\nI0607 13:45:06.768807 996 log.go:172] (0xc0008c6420) (0xc000832000) Stream removed, broadcasting: 5\n" Jun 7 13:45:06.775: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:45:06.775: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:45:06.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:45:07.024: INFO: stderr: "I0607 13:45:06.913498 1019 
log.go:172] (0xc0009c4370) (0xc0009486e0) Create stream\nI0607 13:45:06.913548 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Stream added, broadcasting: 1\nI0607 13:45:06.915898 1019 log.go:172] (0xc0009c4370) Reply frame received for 1\nI0607 13:45:06.915934 1019 log.go:172] (0xc0009c4370) (0xc0005ea280) Create stream\nI0607 13:45:06.915950 1019 log.go:172] (0xc0009c4370) (0xc0005ea280) Stream added, broadcasting: 3\nI0607 13:45:06.916825 1019 log.go:172] (0xc0009c4370) Reply frame received for 3\nI0607 13:45:06.916856 1019 log.go:172] (0xc0009c4370) (0xc000948780) Create stream\nI0607 13:45:06.916869 1019 log.go:172] (0xc0009c4370) (0xc000948780) Stream added, broadcasting: 5\nI0607 13:45:06.918096 1019 log.go:172] (0xc0009c4370) Reply frame received for 5\nI0607 13:45:06.986060 1019 log.go:172] (0xc0009c4370) Data frame received for 5\nI0607 13:45:06.986084 1019 log.go:172] (0xc000948780) (5) Data frame handling\nI0607 13:45:06.986096 1019 log.go:172] (0xc000948780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:45:07.017419 1019 log.go:172] (0xc0009c4370) Data frame received for 5\nI0607 13:45:07.017469 1019 log.go:172] (0xc000948780) (5) Data frame handling\nI0607 13:45:07.017497 1019 log.go:172] (0xc0009c4370) Data frame received for 3\nI0607 13:45:07.017510 1019 log.go:172] (0xc0005ea280) (3) Data frame handling\nI0607 13:45:07.017523 1019 log.go:172] (0xc0005ea280) (3) Data frame sent\nI0607 13:45:07.017544 1019 log.go:172] (0xc0009c4370) Data frame received for 3\nI0607 13:45:07.017554 1019 log.go:172] (0xc0005ea280) (3) Data frame handling\nI0607 13:45:07.018747 1019 log.go:172] (0xc0009c4370) Data frame received for 1\nI0607 13:45:07.018826 1019 log.go:172] (0xc0009486e0) (1) Data frame handling\nI0607 13:45:07.018887 1019 log.go:172] (0xc0009486e0) (1) Data frame sent\nI0607 13:45:07.018918 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Stream removed, broadcasting: 1\nI0607 13:45:07.018941 1019 log.go:172] (0xc0009c4370) 
Go away received\nI0607 13:45:07.019282 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Stream removed, broadcasting: 1\nI0607 13:45:07.019300 1019 log.go:172] (0xc0009c4370) (0xc0005ea280) Stream removed, broadcasting: 3\nI0607 13:45:07.019308 1019 log.go:172] (0xc0009c4370) (0xc000948780) Stream removed, broadcasting: 5\n" Jun 7 13:45:07.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:45:07.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 7 13:45:07.024: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:45:07.054: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 7 13:45:17.130: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:45:17.130: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:45:17.130: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 7 13:45:17.172: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:17.172: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }] Jun 7 13:45:17.172: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:17.172: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:17.172: INFO: Jun 7 13:45:17.172: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 13:45:18.396: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:18.396: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }] Jun 7 13:45:18.396: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:18.396: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:18.396: INFO: Jun 7 13:45:18.396: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 13:45:19.442: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:19.442: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }] Jun 7 13:45:19.442: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:19.442: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:19.442: INFO: Jun 7 13:45:19.442: INFO: StatefulSet ss has 
not reached scale 0, at 3 Jun 7 13:45:20.447: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:20.447: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }] Jun 7 13:45:20.447: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:20.447: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:20.447: INFO: Jun 7 13:45:20.447: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 13:45:21.534: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:21.534: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }] Jun 7 13:45:21.534: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:21.534: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:21.534: INFO: Jun 7 13:45:21.534: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 13:45:22.539: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:22.539: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }] Jun 7 13:45:22.539: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 
UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:22.539: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:22.539: INFO: Jun 7 13:45:22.539: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 13:45:23.587: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:23.587: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:23.587: INFO: Jun 7 13:45:23.587: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 7 13:45:24.605: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:24.605: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 
13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:24.605: INFO: Jun 7 13:45:24.605: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 7 13:45:25.610: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:25.610: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:25.610: INFO: Jun 7 13:45:25.610: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 7 13:45:26.614: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 13:45:26.615: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }] Jun 7 13:45:26.615: INFO: Jun 7 13:45:26.615: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4754 Jun 7 13:45:27.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:45:27.741: INFO: rc: 1 Jun 7 13:45:27.741: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0027d5890 exit status 1 true [0xc0015320f0 0xc001532108 0xc001532120] [0xc0015320f0 0xc001532108 0xc001532120] [0xc001532100 0xc001532118] [0xba70e0 0xba70e0] 0xc0027cac60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 7 13:45:37.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:45:37.837: INFO: rc: 1 Jun 7 13:45:37.837: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002770090 exit status 1 true [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4038 0xc002bd4080] [0xba70e0 0xba70e0] 0xc001c5e060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 7 13:50:27.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:50:27.443: INFO: rc: 1 Jun 7 13:50:27.444: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0028b62d0 exit status 1 true [0xc0026e2068 0xc0026e20a0 0xc0026e20b8] [0xc0026e2068 0xc0026e20a0 0xc0026e20b8] [0xc0026e2090 0xc0026e20b0] [0xba70e0 0xba70e0] 0xc002cfac00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 7 13:50:37.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:50:37.536: INFO: rc: 1 Jun 7 13:50:37.536: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Jun 7 13:50:37.536: INFO: Scaling statefulset ss to 0 Jun 7 13:50:37.543: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 7 13:50:37.544: INFO: Deleting all statefulset in ns statefulset-4754 Jun 7 13:50:37.546: INFO: Scaling statefulset ss to 0 Jun 7 13:50:37.553: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 13:50:37.554: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:50:37.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4754" for this suite. 
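The "Waiting 10s to retry failed RunHostCmd" records above come from the framework re-running the same `kubectl exec` at a fixed interval until it succeeds or the attempt budget is exhausted. A minimal sketch of that pattern in shell, assuming a hypothetical helper name (`run_host_cmd_with_retry` is not part of the e2e framework):

```shell
# Sketch of the retry loop logged above: re-run the given command every
# RETRY_INTERVAL seconds, up to ATTEMPTS times, returning the last exit code.
# run_host_cmd_with_retry is a hypothetical helper, not e2e-framework code.
run_host_cmd_with_retry() {
  local attempts=${ATTEMPTS:-31}        # ~5 minutes at the 10s default
  local interval=${RETRY_INTERVAL:-10}
  local i rc=1
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0                    # success: stop retrying
    rc=$?
    echo "rc: $rc; waiting ${interval}s to retry ($i/$attempts)" >&2
    sleep "$interval"
  done
  return "$rc"                          # give up with the last failure code
}

# Example (assumes kubectl, the namespace, and the pod exist):
# run_host_cmd_with_retry kubectl --kubeconfig=/root/.kube/config \
#   exec --namespace=statefulset-4754 ss-1 -- /bin/sh -c \
#   'mv -v /tmp/index.html /usr/share/nginx/html/'
```

Note that the suite's probe appends `|| true` inside the container shell, so a failed `mv` still exits 0; the nonzero `rc: 1` seen above comes from `kubectl exec` itself failing (container or pod gone), not from the `mv`.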
Jun 7 13:50:45.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:50:45.767: INFO: namespace statefulset-4754 deletion completed in 8.096389069s • [SLOW TEST:371.360 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:50:45.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 7 13:50:45.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47" in namespace "projected-4519" to be "success or failure" Jun 7 13:50:45.998: INFO: Pod 
"downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Pending", Reason="", readiness=false. Elapsed: 51.773275ms Jun 7 13:50:48.091: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144499633s Jun 7 13:50:50.304: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357019182s Jun 7 13:50:52.308: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.361053776s STEP: Saw pod success Jun 7 13:50:52.308: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47" satisfied condition "success or failure" Jun 7 13:50:52.310: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47 container client-container: STEP: delete the pod Jun 7 13:50:52.375: INFO: Waiting for pod downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47 to disappear Jun 7 13:50:52.391: INFO: Pod downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:50:52.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4519" for this suite. 
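The downward API test above waits for the pod to satisfy "success or failure" by polling its phase until it reaches a terminal state. That wait can be sketched outside the framework like this, assuming a hypothetical helper name (`wait_for_pod_completion` is not framework code):

```shell
# Sketch of the "Waiting up to 5m0s for pod ... to be success or failure" poll:
# read the pod's status.phase until it is Succeeded or Failed, or time out.
# wait_for_pod_completion is a hypothetical helper, not e2e-framework code.
wait_for_pod_completion() {
  local ns=$1 pod=$2 timeout=${3:-300} phase
  local deadline=$((SECONDS + timeout))
  while [ "$SECONDS" -lt "$deadline" ]; do
    phase=$(kubectl -n "$ns" get pod "$pod" -o jsonpath='{.status.phase}')
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;   # terminal phases
    esac
    sleep 2
  done
  echo "timeout waiting for $pod" >&2
  return 1
}

# Example (assumes the namespace and pod from the log exist):
# wait_for_pod_completion projected-4519 \
#   downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47
```

The framework then treats `Succeeded` as test success and fetches the container logs, as the `Saw pod success` and `Trying to get logs` lines above show.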
Jun 7 13:50:58.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:50:58.667: INFO: namespace projected-4519 deletion completed in 6.273116403s
• [SLOW TEST:12.900 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:50:58.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 13:50:58.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3677'
Jun 7 13:50:58.916: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 7 13:50:58.916: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jun 7 13:51:01.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3677'
Jun 7 13:51:01.257: INFO: stderr: ""
Jun 7 13:51:01.257: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:51:01.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3677" for this suite.
Jun 7 13:51:23.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:51:23.751: INFO: namespace kubectl-3677 deletion completed in 22.350430702s
• [SLOW TEST:25.083 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:51:23.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-5f77e3be-ded6-4781-b657-76a0b1cb8347
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:51:23.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7505" for this suite.
Jun 7 13:51:30.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:51:30.090: INFO: namespace secrets-7505 deletion completed in 6.118233779s
• [SLOW TEST:6.339 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:51:30.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0607 13:51:42.187462 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 7 13:51:42.187: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:51:42.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1549" for this suite.
Jun 7 13:51:54.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:51:54.353: INFO: namespace gc-1549 deletion completed in 12.103574221s
• [SLOW TEST:24.262 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:51:54.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jun 7 13:51:54.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1115'
Jun 7 13:51:54.924: INFO: stderr: ""
Jun 7 13:51:54.924: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 13:51:54.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:51:55.105: INFO: stderr: ""
Jun 7 13:51:55.105: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
Jun 7 13:51:55.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:51:55.197: INFO: stderr: ""
Jun 7 13:51:55.197: INFO: stdout: ""
Jun 7 13:51:55.197: INFO: update-demo-nautilus-r2fzp is created but not running
Jun 7 13:52:00.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:00.305: INFO: stderr: ""
Jun 7 13:52:00.305: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
Jun 7 13:52:00.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:00.406: INFO: stderr: ""
Jun 7 13:52:00.406: INFO: stdout: ""
Jun 7 13:52:00.406: INFO: update-demo-nautilus-r2fzp is created but not running
Jun 7 13:52:05.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:05.578: INFO: stderr: ""
Jun 7 13:52:05.578: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
Jun 7 13:52:05.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:05.674: INFO: stderr: ""
Jun 7 13:52:05.674: INFO: stdout: "true"
Jun 7 13:52:05.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:05.769: INFO: stderr: ""
Jun 7 13:52:05.769: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:05.769: INFO: validating pod update-demo-nautilus-r2fzp
Jun 7 13:52:05.788: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 13:52:05.788: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:05.788: INFO: update-demo-nautilus-r2fzp is verified up and running
Jun 7 13:52:05.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:05.923: INFO: stderr: ""
Jun 7 13:52:05.923: INFO: stdout: "true"
Jun 7 13:52:05.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:06.020: INFO: stderr: ""
Jun 7 13:52:06.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:06.020: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:06.052: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 13:52:06.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:06.053: INFO: update-demo-nautilus-v7jdv is verified up and running
STEP: scaling down the replication controller
Jun 7 13:52:06.056: INFO: scanned /root for discovery docs:
Jun 7 13:52:06.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1115'
Jun 7 13:52:07.294: INFO: stderr: ""
Jun 7 13:52:07.294: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 13:52:07.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:07.397: INFO: stderr: ""
Jun 7 13:52:07.397: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 7 13:52:12.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:12.492: INFO: stderr: ""
Jun 7 13:52:12.492: INFO: stdout: "update-demo-nautilus-v7jdv "
Jun 7 13:52:12.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:12.579: INFO: stderr: ""
Jun 7 13:52:12.579: INFO: stdout: "true"
Jun 7 13:52:12.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:12.676: INFO: stderr: ""
Jun 7 13:52:12.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:12.676: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:12.679: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 13:52:12.679: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:12.679: INFO: update-demo-nautilus-v7jdv is verified up and running
STEP: scaling up the replication controller
Jun 7 13:52:12.680: INFO: scanned /root for discovery docs:
Jun 7 13:52:12.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1115'
Jun 7 13:52:13.820: INFO: stderr: ""
Jun 7 13:52:13.820: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 13:52:13.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:13.916: INFO: stderr: ""
Jun 7 13:52:13.916: INFO: stdout: "update-demo-nautilus-v7jdv update-demo-nautilus-xx2wf "
Jun 7 13:52:13.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:14.023: INFO: stderr: ""
Jun 7 13:52:14.024: INFO: stdout: "true"
Jun 7 13:52:14.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:14.118: INFO: stderr: ""
Jun 7 13:52:14.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:14.118: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:14.121: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 13:52:14.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:14.121: INFO: update-demo-nautilus-v7jdv is verified up and running
Jun 7 13:52:14.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xx2wf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:14.205: INFO: stderr: ""
Jun 7 13:52:14.205: INFO: stdout: ""
Jun 7 13:52:14.205: INFO: update-demo-nautilus-xx2wf is created but not running
Jun 7 13:52:19.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:19.299: INFO: stderr: ""
Jun 7 13:52:19.299: INFO: stdout: "update-demo-nautilus-v7jdv update-demo-nautilus-xx2wf "
Jun 7 13:52:19.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.388: INFO: stderr: ""
Jun 7 13:52:19.388: INFO: stdout: "true"
Jun 7 13:52:19.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.509: INFO: stderr: ""
Jun 7 13:52:19.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:19.510: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:19.513: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 13:52:19.513: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:19.513: INFO: update-demo-nautilus-v7jdv is verified up and running
Jun 7 13:52:19.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xx2wf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.604: INFO: stderr: ""
Jun 7 13:52:19.604: INFO: stdout: "true"
Jun 7 13:52:19.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xx2wf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.701: INFO: stderr: ""
Jun 7 13:52:19.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:19.701: INFO: validating pod update-demo-nautilus-xx2wf
Jun 7 13:52:19.705: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 13:52:19.705: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:19.705: INFO: update-demo-nautilus-xx2wf is verified up and running
STEP: using delete to clean up resources
Jun 7 13:52:19.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1115'
Jun 7 13:52:19.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 13:52:19.874: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 7 13:52:19.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1115'
Jun 7 13:52:19.971: INFO: stderr: "No resources found.\n"
Jun 7 13:52:19.971: INFO: stdout: ""
Jun 7 13:52:19.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1115 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 13:52:20.097: INFO: stderr: ""
Jun 7 13:52:20.097: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:52:20.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1115" for this suite.
Jun 7 13:52:44.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:52:44.416: INFO: namespace kubectl-1115 deletion completed in 24.305170839s
• [SLOW TEST:50.063 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:52:44.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4212/configmap-test-f807c707-1fb0-4213-a128-a9a3450e6603
STEP: Creating a pod to test consume configMaps
Jun 7 13:52:44.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca" in namespace "configmap-4212" to be "success or failure"
Jun 7 13:52:44.696: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 60.945127ms
Jun 7 13:52:46.701: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065268184s
Jun 7 13:52:48.705: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06934847s
Jun 7 13:52:50.748: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112447103s
Jun 7 13:52:52.752: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116214162s
STEP: Saw pod success
Jun 7 13:52:52.752: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca" satisfied condition "success or failure"
Jun 7 13:52:52.754: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca container env-test:
STEP: delete the pod
Jun 7 13:52:52.792: INFO: Waiting for pod pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca to disappear
Jun 7 13:52:52.880: INFO: Pod pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:52:52.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4212" for this suite.
Jun 7 13:52:58.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:52:58.999: INFO: namespace configmap-4212 deletion completed in 6.114803822s
• [SLOW TEST:14.583 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:52:58.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 13:52:59.072: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jun 7 13:53:01.195: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:53:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6130" for this suite.
Jun 7 13:53:10.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:53:10.646: INFO: namespace replication-controller-6130 deletion completed in 8.227502455s
• [SLOW TEST:11.647 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:53:10.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jun 7 13:53:10.875: INFO: Waiting up to 5m0s for pod "var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c" in namespace "var-expansion-4944" to be "success or failure"
Jun 7 13:53:10.899: INFO: Pod "var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.934259ms
Jun 7 13:53:12.903: INFO: Pod "var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027483958s
Jun 7 13:53:15.343: INFO: Pod "var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c": Phase="Running", Reason="", readiness=true. Elapsed: 4.467749603s
Jun 7 13:53:17.348: INFO: Pod "var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.472270007s
STEP: Saw pod success
Jun 7 13:53:17.348: INFO: Pod "var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c" satisfied condition "success or failure"
Jun 7 13:53:17.351: INFO: Trying to get logs from node iruya-worker pod var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c container dapi-container:
STEP: delete the pod
Jun 7 13:53:17.463: INFO: Waiting for pod var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c to disappear
Jun 7 13:53:17.473: INFO: Pod var-expansion-98efe8c1-5be7-4b61-affe-93f5873cb33c no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:53:17.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4944" for this suite.
Jun 7 13:53:23.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:53:23.617: INFO: namespace var-expansion-4944 deletion completed in 6.141937078s
• [SLOW TEST:12.971 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:53:23.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0e4fec99-c3df-4a8c-8bc2-eefda09cc71c
STEP: Creating a pod to test consume configMaps
Jun 7 13:53:23.820: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50" in namespace "projected-4133" to be "success or failure"
Jun 7 13:53:23.923: INFO: Pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50": Phase="Pending", Reason="", readiness=false. Elapsed: 102.899717ms
Jun 7 13:53:25.927: INFO: Pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10709255s
Jun 7 13:53:28.008: INFO: Pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187795107s
Jun 7 13:53:30.253: INFO: Pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433425332s
Jun 7 13:53:32.258: INFO: Pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.438531032s
STEP: Saw pod success
Jun 7 13:53:32.258: INFO: Pod "pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50" satisfied condition "success or failure"
Jun 7 13:53:32.262: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50 container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 13:53:32.494: INFO: Waiting for pod pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50 to disappear
Jun 7 13:53:32.606: INFO: Pod pod-projected-configmaps-3f67fc94-cb70-4d32-b4dd-78985b652e50 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:53:32.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4133" for this suite.
Jun 7 13:53:38.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:53:38.738: INFO: namespace projected-4133 deletion completed in 6.127985701s
• [SLOW TEST:15.120 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:53:38.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0607 13:53:40.038906 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 7 13:53:40.038: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:53:40.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6640" for this suite.
Jun 7 13:53:46.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:53:46.160: INFO: namespace gc-6640 deletion completed in 6.118196143s
• [SLOW TEST:7.421 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:53:46.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jun 7 13:53:46.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5616'
Jun 7 13:53:46.612: INFO: stderr: ""
Jun 7 13:53:46.612: INFO: stdout: "pod/pause created\n"
Jun 7 13:53:46.612: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 7 13:53:46.612: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5616" to be "running and ready"
Jun 7 13:53:46.665: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false.
Elapsed: 53.276313ms Jun 7 13:53:48.878: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265768256s Jun 7 13:53:50.882: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270319852s Jun 7 13:53:52.887: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.274525965s Jun 7 13:53:52.887: INFO: Pod "pause" satisfied condition "running and ready" Jun 7 13:53:52.887: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jun 7 13:53:52.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5616' Jun 7 13:53:52.981: INFO: stderr: "" Jun 7 13:53:52.981: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 7 13:53:52.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5616' Jun 7 13:53:53.273: INFO: stderr: "" Jun 7 13:53:53.273: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 7 13:53:53.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5616' Jun 7 13:53:53.374: INFO: stderr: "" Jun 7 13:53:53.374: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 7 13:53:53.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5616' Jun 7 13:53:53.517: INFO: stderr: "" Jun 7 13:53:53.517: INFO: stdout: "NAME READY STATUS RESTARTS 
AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jun 7 13:53:53.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5616' Jun 7 13:53:53.745: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 13:53:53.745: INFO: stdout: "pod \"pause\" force deleted\n" Jun 7 13:53:53.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5616' Jun 7 13:53:53.865: INFO: stderr: "No resources found.\n" Jun 7 13:53:53.865: INFO: stdout: "" Jun 7 13:53:53.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5616 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 7 13:53:53.954: INFO: stderr: "" Jun 7 13:53:53.954: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:53:53.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5616" for this suite. 
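The label state this test verifies can be pictured as the pod metadata below; this is a sketch of the object after `kubectl label pods pause testing-label=testing-label-value`, with names taken from the log and the image an assumption.

```yaml
# Sketch: the pause pod after the label is applied.
# `kubectl label pods pause testing-label-` (trailing dash) removes the
# label again, which is what the final `get pod -L testing-label` checks.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause   # illustrative; the suite supplies its own manifest
```

The `-L testing-label` flag adds a TESTING-LABEL column to the `kubectl get` output, which is why the verification steps compare the stdout table rather than the object itself.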
Jun 7 13:54:00.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:54:00.206: INFO: namespace kubectl-5616 deletion completed in 6.248879676s • [SLOW TEST:14.046 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:54:00.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0607 13:54:31.100779 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 7 13:54:31.100: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:54:31.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4808" for this suite. 
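The orphaning behavior exercised above is driven by the delete request's propagation policy. A sketch of the DeleteOptions body such a delete sends (field names per the Kubernetes API; the exact wire form the suite uses is not shown in this log):

```yaml
# Sketch: DeleteOptions for deleting the Deployment without cascading.
# With propagationPolicy: Orphan, the owned ReplicaSet is left running,
# which is why the test waits 30 seconds and expects the RS to survive.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # alternatives: Background, Foreground
```

By contrast, the earlier "should delete RS created by deployment when not orphaning" test relies on cascading deletion, where the garbage collector removes the ReplicaSet and its pods once the owning Deployment is gone.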
Jun 7 13:54:39.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:54:39.234: INFO: namespace gc-4808 deletion completed in 8.131269437s • [SLOW TEST:39.027 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:54:39.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 7 13:54:43.945: INFO: Successfully updated pod "pod-update-e7405487-199b-4cbf-a0a7-91fac529544c" STEP: verifying the updated pod is in kubernetes Jun 7 13:54:43.981: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:54:43.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-20" for this 
suite. Jun 7 13:55:06.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:55:06.104: INFO: namespace pods-20 deletion completed in 22.119536007s • [SLOW TEST:26.869 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:55:06.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-95b90b3d-fc1e-4029-b07d-aec2ec4ee3b9 in namespace container-probe-8492 Jun 7 13:55:12.258: INFO: Started pod busybox-95b90b3d-fc1e-4029-b07d-aec2ec4ee3b9 in namespace container-probe-8492 STEP: checking the pod's current state and verifying that restartCount is present Jun 7 13:55:12.261: INFO: Initial restart count of pod busybox-95b90b3d-fc1e-4029-b07d-aec2ec4ee3b9 is 0 Jun 7 13:56:08.378: INFO: Restart count of pod 
container-probe-8492/busybox-95b90b3d-fc1e-4029-b07d-aec2ec4ee3b9 is now 1 (56.11732196s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:56:08.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8492" for this suite. Jun 7 13:56:14.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:56:14.630: INFO: namespace container-probe-8492 deletion completed in 6.159767125s • [SLOW TEST:68.526 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:56:14.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 7 13:56:14.847: INFO: Waiting up to 5m0s for pod "pod-0403da63-53cb-4011-9314-140cae948b32" in namespace "emptydir-5878" to be "success 
or failure" Jun 7 13:56:14.849: INFO: Pod "pod-0403da63-53cb-4011-9314-140cae948b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203519ms Jun 7 13:56:16.879: INFO: Pod "pod-0403da63-53cb-4011-9314-140cae948b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032074623s Jun 7 13:56:18.883: INFO: Pod "pod-0403da63-53cb-4011-9314-140cae948b32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036174909s Jun 7 13:56:20.887: INFO: Pod "pod-0403da63-53cb-4011-9314-140cae948b32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040447802s STEP: Saw pod success Jun 7 13:56:20.887: INFO: Pod "pod-0403da63-53cb-4011-9314-140cae948b32" satisfied condition "success or failure" Jun 7 13:56:20.889: INFO: Trying to get logs from node iruya-worker2 pod pod-0403da63-53cb-4011-9314-140cae948b32 container test-container: STEP: delete the pod Jun 7 13:56:20.922: INFO: Waiting for pod pod-0403da63-53cb-4011-9314-140cae948b32 to disappear Jun 7 13:56:20.956: INFO: Pod pod-0403da63-53cb-4011-9314-140cae948b32 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 13:56:20.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5878" for this suite. 
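The (non-root,0644,tmpfs) EmptyDir variant creates a pod along the lines of the sketch below. Assumptions are marked in comments: the real suite uses its own mount-test image and arguments, and the user ID and paths here are illustrative.

```yaml
# Sketch: memory-backed emptyDir written by a non-root user with mode 0644,
# matching the (non-root,0644,tmpfs) test name above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example
spec:
  restartPolicy: Never           # test waits for "success or failure"
  securityContext:
    runAsUser: 1001              # non-root, per the variant name (assumed UID)
  volumes:
    - name: test-volume
      emptyDir:
        medium: Memory           # tmpfs backing
  containers:
    - name: test-container
      image: busybox             # assumed; the suite uses a mount-test image
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
```

The test then fetches the container's logs (the "Trying to get logs from node ... container test-container" step) to confirm the file's mode and ownership.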
Jun 7 13:56:27.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 13:56:27.123: INFO: namespace emptydir-5878 deletion completed in 6.163321971s • [SLOW TEST:12.493 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 13:56:27.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8328 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 7 13:56:27.247: INFO: Found 0 stateful pods, waiting for 3 Jun 7 13:56:37.252: INFO: Found 2 stateful pods, waiting for 3 Jun 7 13:56:47.252: INFO: Waiting for pod ss2-0 
to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:56:47.252: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:56:47.252: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 7 13:56:47.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8328 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:56:47.632: INFO: stderr: "I0607 13:56:47.384442 2435 log.go:172] (0xc0009260b0) (0xc0008c4640) Create stream\nI0607 13:56:47.384502 2435 log.go:172] (0xc0009260b0) (0xc0008c4640) Stream added, broadcasting: 1\nI0607 13:56:47.387122 2435 log.go:172] (0xc0009260b0) Reply frame received for 1\nI0607 13:56:47.387158 2435 log.go:172] (0xc0009260b0) (0xc000960000) Create stream\nI0607 13:56:47.387169 2435 log.go:172] (0xc0009260b0) (0xc000960000) Stream added, broadcasting: 3\nI0607 13:56:47.387895 2435 log.go:172] (0xc0009260b0) Reply frame received for 3\nI0607 13:56:47.387923 2435 log.go:172] (0xc0009260b0) (0xc0009600a0) Create stream\nI0607 13:56:47.387933 2435 log.go:172] (0xc0009260b0) (0xc0009600a0) Stream added, broadcasting: 5\nI0607 13:56:47.388659 2435 log.go:172] (0xc0009260b0) Reply frame received for 5\nI0607 13:56:47.587193 2435 log.go:172] (0xc0009260b0) Data frame received for 5\nI0607 13:56:47.587234 2435 log.go:172] (0xc0009600a0) (5) Data frame handling\nI0607 13:56:47.587260 2435 log.go:172] (0xc0009600a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:56:47.623030 2435 log.go:172] (0xc0009260b0) Data frame received for 5\nI0607 13:56:47.623051 2435 log.go:172] (0xc0009600a0) (5) Data frame handling\nI0607 13:56:47.623099 2435 log.go:172] (0xc0009260b0) Data frame received for 3\nI0607 13:56:47.623134 2435 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 13:56:47.623236 2435 log.go:172] (0xc000960000) (3) Data frame sent\nI0607 
13:56:47.623388 2435 log.go:172] (0xc0009260b0) Data frame received for 3\nI0607 13:56:47.623400 2435 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 13:56:47.625523 2435 log.go:172] (0xc0009260b0) Data frame received for 1\nI0607 13:56:47.625534 2435 log.go:172] (0xc0008c4640) (1) Data frame handling\nI0607 13:56:47.625553 2435 log.go:172] (0xc0008c4640) (1) Data frame sent\nI0607 13:56:47.625603 2435 log.go:172] (0xc0009260b0) (0xc0008c4640) Stream removed, broadcasting: 1\nI0607 13:56:47.625686 2435 log.go:172] (0xc0009260b0) Go away received\nI0607 13:56:47.625847 2435 log.go:172] (0xc0009260b0) (0xc0008c4640) Stream removed, broadcasting: 1\nI0607 13:56:47.625864 2435 log.go:172] (0xc0009260b0) (0xc000960000) Stream removed, broadcasting: 3\nI0607 13:56:47.625871 2435 log.go:172] (0xc0009260b0) (0xc0009600a0) Stream removed, broadcasting: 5\n" Jun 7 13:56:47.632: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 7 13:56:47.632: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 7 13:56:57.661: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 7 13:57:07.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8328 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 7 13:57:12.114: INFO: stderr: "I0607 13:57:12.021012 2456 log.go:172] (0xc00053e8f0) (0xc0006f8820) Create stream\nI0607 13:57:12.021046 2456 log.go:172] (0xc00053e8f0) (0xc0006f8820) Stream added, broadcasting: 1\nI0607 13:57:12.024094 2456 log.go:172] (0xc00053e8f0) Reply frame received for 1\nI0607 13:57:12.024147 2456 log.go:172] (0xc00053e8f0) (0xc00096e000) Create stream\nI0607 13:57:12.024164 2456 
log.go:172] (0xc00053e8f0) (0xc00096e000) Stream added, broadcasting: 3\nI0607 13:57:12.025387 2456 log.go:172] (0xc00053e8f0) Reply frame received for 3\nI0607 13:57:12.025434 2456 log.go:172] (0xc00053e8f0) (0xc000a06000) Create stream\nI0607 13:57:12.025464 2456 log.go:172] (0xc00053e8f0) (0xc000a06000) Stream added, broadcasting: 5\nI0607 13:57:12.026479 2456 log.go:172] (0xc00053e8f0) Reply frame received for 5\nI0607 13:57:12.104398 2456 log.go:172] (0xc00053e8f0) Data frame received for 3\nI0607 13:57:12.104431 2456 log.go:172] (0xc00096e000) (3) Data frame handling\nI0607 13:57:12.104454 2456 log.go:172] (0xc00096e000) (3) Data frame sent\nI0607 13:57:12.104812 2456 log.go:172] (0xc00053e8f0) Data frame received for 3\nI0607 13:57:12.104842 2456 log.go:172] (0xc00096e000) (3) Data frame handling\nI0607 13:57:12.105063 2456 log.go:172] (0xc00053e8f0) Data frame received for 5\nI0607 13:57:12.105081 2456 log.go:172] (0xc000a06000) (5) Data frame handling\nI0607 13:57:12.105096 2456 log.go:172] (0xc000a06000) (5) Data frame sent\nI0607 13:57:12.105108 2456 log.go:172] (0xc00053e8f0) Data frame received for 5\nI0607 13:57:12.105311 2456 log.go:172] (0xc000a06000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:57:12.106856 2456 log.go:172] (0xc00053e8f0) Data frame received for 1\nI0607 13:57:12.106875 2456 log.go:172] (0xc0006f8820) (1) Data frame handling\nI0607 13:57:12.106889 2456 log.go:172] (0xc0006f8820) (1) Data frame sent\nI0607 13:57:12.106904 2456 log.go:172] (0xc00053e8f0) (0xc0006f8820) Stream removed, broadcasting: 1\nI0607 13:57:12.106921 2456 log.go:172] (0xc00053e8f0) Go away received\nI0607 13:57:12.107270 2456 log.go:172] (0xc00053e8f0) (0xc0006f8820) Stream removed, broadcasting: 1\nI0607 13:57:12.107287 2456 log.go:172] (0xc00053e8f0) (0xc00096e000) Stream removed, broadcasting: 3\nI0607 13:57:12.107295 2456 log.go:172] (0xc00053e8f0) (0xc000a06000) Stream removed, broadcasting: 5\n" Jun 7 13:57:12.114: 
INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 7 13:57:12.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 7 13:57:22.137: INFO: Waiting for StatefulSet statefulset-8328/ss2 to complete update Jun 7 13:57:22.137: INFO: Waiting for Pod statefulset-8328/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:57:22.137: INFO: Waiting for Pod statefulset-8328/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:57:22.137: INFO: Waiting for Pod statefulset-8328/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:57:32.146: INFO: Waiting for StatefulSet statefulset-8328/ss2 to complete update Jun 7 13:57:32.146: INFO: Waiting for Pod statefulset-8328/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:57:32.146: INFO: Waiting for Pod statefulset-8328/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:57:42.145: INFO: Waiting for StatefulSet statefulset-8328/ss2 to complete update Jun 7 13:57:42.145: INFO: Waiting for Pod statefulset-8328/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:57:52.145: INFO: Waiting for StatefulSet statefulset-8328/ss2 to complete update Jun 7 13:57:52.145: INFO: Waiting for Pod statefulset-8328/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 7 13:58:02.144: INFO: Waiting for StatefulSet statefulset-8328/ss2 to complete update Jun 7 13:58:02.144: INFO: Waiting for Pod statefulset-8328/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 7 13:58:12.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8328 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 7 13:58:12.463: INFO: stderr: "I0607 
13:58:12.298704 2488 log.go:172] (0xc0009ea420) (0xc0002ba820) Create stream\nI0607 13:58:12.298779 2488 log.go:172] (0xc0009ea420) (0xc0002ba820) Stream added, broadcasting: 1\nI0607 13:58:12.301393 2488 log.go:172] (0xc0009ea420) Reply frame received for 1\nI0607 13:58:12.301427 2488 log.go:172] (0xc0009ea420) (0xc00069e280) Create stream\nI0607 13:58:12.301435 2488 log.go:172] (0xc0009ea420) (0xc00069e280) Stream added, broadcasting: 3\nI0607 13:58:12.302623 2488 log.go:172] (0xc0009ea420) Reply frame received for 3\nI0607 13:58:12.303137 2488 log.go:172] (0xc0009ea420) (0xc0008b2000) Create stream\nI0607 13:58:12.303178 2488 log.go:172] (0xc0009ea420) (0xc0008b2000) Stream added, broadcasting: 5\nI0607 13:58:12.304642 2488 log.go:172] (0xc0009ea420) Reply frame received for 5\nI0607 13:58:12.410299 2488 log.go:172] (0xc0009ea420) Data frame received for 5\nI0607 13:58:12.410343 2488 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0607 13:58:12.410369 2488 log.go:172] (0xc0008b2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:58:12.454922 2488 log.go:172] (0xc0009ea420) Data frame received for 5\nI0607 13:58:12.454946 2488 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0607 13:58:12.454991 2488 log.go:172] (0xc0009ea420) Data frame received for 3\nI0607 13:58:12.455026 2488 log.go:172] (0xc00069e280) (3) Data frame handling\nI0607 13:58:12.455041 2488 log.go:172] (0xc00069e280) (3) Data frame sent\nI0607 13:58:12.455047 2488 log.go:172] (0xc0009ea420) Data frame received for 3\nI0607 13:58:12.455050 2488 log.go:172] (0xc00069e280) (3) Data frame handling\nI0607 13:58:12.456465 2488 log.go:172] (0xc0009ea420) Data frame received for 1\nI0607 13:58:12.456481 2488 log.go:172] (0xc0002ba820) (1) Data frame handling\nI0607 13:58:12.456493 2488 log.go:172] (0xc0002ba820) (1) Data frame sent\nI0607 13:58:12.456601 2488 log.go:172] (0xc0009ea420) (0xc0002ba820) Stream removed, broadcasting: 1\nI0607 13:58:12.456651 2488 
log.go:172] (0xc0009ea420) Go away received\nI0607 13:58:12.456848 2488 log.go:172] (0xc0009ea420) (0xc0002ba820) Stream removed, broadcasting: 1\nI0607 13:58:12.456861 2488 log.go:172] (0xc0009ea420) (0xc00069e280) Stream removed, broadcasting: 3\nI0607 13:58:12.456868 2488 log.go:172] (0xc0009ea420) (0xc0008b2000) Stream removed, broadcasting: 5\n"
Jun 7 13:58:12.463: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:58:12.463: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:58:22.495: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jun 7 13:58:32.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8328 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:58:32.818: INFO: stderr: "I0607 13:58:32.718310 2510 log.go:172] (0xc000a9e580) (0xc0005f8be0) Create stream\nI0607 13:58:32.718395 2510 log.go:172] (0xc000a9e580) (0xc0005f8be0) Stream added, broadcasting: 1\nI0607 13:58:32.721085 2510 log.go:172] (0xc000a9e580) Reply frame received for 1\nI0607 13:58:32.721284 2510 log.go:172] (0xc000a9e580) (0xc000824000) Create stream\nI0607 13:58:32.721308 2510 log.go:172] (0xc000a9e580) (0xc000824000) Stream added, broadcasting: 3\nI0607 13:58:32.722562 2510 log.go:172] (0xc000a9e580) Reply frame received for 3\nI0607 13:58:32.722603 2510 log.go:172] (0xc000a9e580) (0xc0005f8c80) Create stream\nI0607 13:58:32.722616 2510 log.go:172] (0xc000a9e580) (0xc0005f8c80) Stream added, broadcasting: 5\nI0607 13:58:32.723853 2510 log.go:172] (0xc000a9e580) Reply frame received for 5\nI0607 13:58:32.809396 2510 log.go:172] (0xc000a9e580) Data frame received for 3\nI0607 13:58:32.809441 2510 log.go:172] (0xc000824000) (3) Data frame handling\nI0607 13:58:32.809459 2510 log.go:172] (0xc000824000) (3) Data frame sent\nI0607 13:58:32.809471 2510 log.go:172] (0xc000a9e580) Data frame received for 3\nI0607 13:58:32.809481 2510 log.go:172] (0xc000824000) (3) Data frame handling\nI0607 13:58:32.809534 2510 log.go:172] (0xc000a9e580) Data frame received for 5\nI0607 13:58:32.809574 2510 log.go:172] (0xc0005f8c80) (5) Data frame handling\nI0607 13:58:32.809593 2510 log.go:172] (0xc0005f8c80) (5) Data frame sent\nI0607 13:58:32.809609 2510 log.go:172] (0xc000a9e580) Data frame received for 5\nI0607 13:58:32.809619 2510 log.go:172] (0xc0005f8c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:58:32.811254 2510 log.go:172] (0xc000a9e580) Data frame received for 1\nI0607 13:58:32.811284 2510 log.go:172] (0xc0005f8be0) (1) Data frame handling\nI0607 13:58:32.811306 2510 log.go:172] (0xc0005f8be0) (1) Data frame sent\nI0607 13:58:32.811332 2510 log.go:172] (0xc000a9e580) (0xc0005f8be0) Stream removed, broadcasting: 1\nI0607 13:58:32.811404 2510 log.go:172] (0xc000a9e580) Go away received\nI0607 13:58:32.811880 2510 log.go:172] (0xc000a9e580) (0xc0005f8be0) Stream removed, broadcasting: 1\nI0607 13:58:32.811911 2510 log.go:172] (0xc000a9e580) (0xc000824000) Stream removed, broadcasting: 3\nI0607 13:58:32.811931 2510 log.go:172] (0xc000a9e580) (0xc0005f8c80) Stream removed, broadcasting: 5\n"
Jun 7 13:58:32.819: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:58:32.819: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:58:52.841: INFO: Waiting for StatefulSet statefulset-8328/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 7 13:59:02.848: INFO: Deleting all statefulset in ns statefulset-8328
Jun 7 13:59:02.850: INFO: Scaling statefulset ss2 to 0
Jun 7 13:59:22.878: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:59:22.880: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:59:22.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8328" for this suite.
Jun 7 13:59:28.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:59:28.986: INFO: namespace statefulset-8328 deletion completed in 6.092884567s

• [SLOW TEST:181.863 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 7 13:59:29.053: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:59:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8516" for this suite.
Jun 7 13:59:42.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:59:42.692: INFO: namespace init-container-8516 deletion completed in 6.079317599s

• [SLOW TEST:13.705 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:59:42.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:59:43.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369" in namespace "downward-api-4411" to be "success or failure"
Jun 7 13:59:43.120: INFO: Pod "downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369": Phase="Pending", Reason="", readiness=false. Elapsed: 19.070243ms
Jun 7 13:59:45.198: INFO: Pod "downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096393083s
Jun 7 13:59:47.202: INFO: Pod "downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100526947s
STEP: Saw pod success
Jun 7 13:59:47.202: INFO: Pod "downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369" satisfied condition "success or failure"
Jun 7 13:59:47.205: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369 container client-container:
STEP: delete the pod
Jun 7 13:59:47.271: INFO: Waiting for pod downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369 to disappear
Jun 7 13:59:47.278: INFO: Pod downwardapi-volume-9d0fadb6-f37f-4f65-a36e-f20113747369 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:59:47.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4411" for this suite.
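The Downward API volume test above mounts a file whose content comes from `resourceFieldRef: limits.memory`; with no memory limit set on the container, the value defaults to the node's allocatable memory. A minimal, hedged sketch of such a pod spec (names, image, and command are illustrative, not the exact spec the e2e framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # illustrative; the test uses its own image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: no resources.limits.memory here, so limits.memory falls back
    # to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```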
Jun 7 13:59:53.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:59:53.378: INFO: namespace downward-api-4411 deletion completed in 6.096652851s

• [SLOW TEST:10.687 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:59:53.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:59:59.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7871" for this suite.
Jun 7 14:00:05.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:05.843: INFO: namespace namespaces-7871 deletion completed in 6.096920735s
STEP: Destroying namespace "nsdeletetest-6074" for this suite.
Jun 7 14:00:05.845: INFO: Namespace nsdeletetest-6074 was already deleted
STEP: Destroying namespace "nsdeletetest-5025" for this suite.
Jun 7 14:00:11.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:11.944: INFO: namespace nsdeletetest-5025 deletion completed in 6.098344376s

• [SLOW TEST:18.564 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:00:11.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-150dd30b-bfcb-45f9-b29e-1ee4f88db8f2
STEP: Creating a pod to test consume secrets
Jun 7 14:00:12.036: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64" in namespace "projected-1031" to be "success or failure"
Jun 7 14:00:12.040: INFO: Pod "pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031424ms
Jun 7 14:00:14.046: INFO: Pod "pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009389823s
Jun 7 14:00:16.049: INFO: Pod "pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01318819s
STEP: Saw pod success
Jun 7 14:00:16.050: INFO: Pod "pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64" satisfied condition "success or failure"
Jun 7 14:00:16.052: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64 container projected-secret-volume-test:
STEP: delete the pod
Jun 7 14:00:16.090: INFO: Waiting for pod pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64 to disappear
Jun 7 14:00:16.114: INFO: Pod pod-projected-secrets-7934abed-4a7c-40ad-90dc-ffde1ec40a64 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:00:16.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1031" for this suite.
Jun 7 14:00:22.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:22.208: INFO: namespace projected-1031 deletion completed in 6.090195186s

• [SLOW TEST:10.264 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:00:22.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:00:22.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6599" for this suite.
Jun 7 14:00:28.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:28.468: INFO: namespace kubelet-test-6599 deletion completed in 6.090314s

• [SLOW TEST:6.260 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:00:28.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 14:00:28.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4408'
Jun 7 14:00:28.658: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 7 14:00:28.658: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jun 7 14:00:28.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4408'
Jun 7 14:00:28.793: INFO: stderr: ""
Jun 7 14:00:28.793: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:00:28.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4408" for this suite.
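The deprecation warning in the log above points at `kubectl create` as the replacement for `kubectl run --generator=job/v1`. A hedged sketch of the equivalent Job manifest (the generator's exact defaults may differ from this reconstruction):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure   # matches the test's --restart=OnFailure
```

On newer kubectl this is roughly `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`, though the created Job's restart policy may need to be set explicitly.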
Jun 7 14:00:34.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:34.888: INFO: namespace kubectl-4408 deletion completed in 6.09109335s

• [SLOW TEST:6.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:00:34.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:00:34.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757" in namespace "projected-1290" to be "success or failure"
Jun 7 14:00:34.968: INFO: Pod "downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757": Phase="Pending", Reason="", readiness=false. Elapsed: 9.352281ms
Jun 7 14:00:37.054: INFO: Pod "downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095307836s
Jun 7 14:00:39.059: INFO: Pod "downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09975194s
STEP: Saw pod success
Jun 7 14:00:39.059: INFO: Pod "downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757" satisfied condition "success or failure"
Jun 7 14:00:39.061: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757 container client-container:
STEP: delete the pod
Jun 7 14:00:39.106: INFO: Waiting for pod downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757 to disappear
Jun 7 14:00:39.112: INFO: Pod downwardapi-volume-d6b02576-7fb6-46cc-ac4d-3721baa58757 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:00:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1290" for this suite.
Jun 7 14:00:45.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:45.209: INFO: namespace projected-1290 deletion completed in 6.093838305s

• [SLOW TEST:10.320 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:00:45.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jun 7 14:00:45.281: INFO: Pod name pod-release: Found 0 pods out of 1
Jun 7 14:00:50.286: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:00:51.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5251" for this suite.
Jun 7 14:00:57.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:00:57.399: INFO: namespace replication-controller-5251 deletion completed in 6.085978972s

• [SLOW TEST:12.189 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:00:57.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jun 7 14:00:57.570: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jun 7 14:00:57.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2041'
Jun 7 14:00:57.884: INFO: stderr: ""
Jun 7 14:00:57.884: INFO: stdout: "service/redis-slave created\n"
Jun 7 14:00:57.884: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jun 7 14:00:57.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2041'
Jun 7 14:00:58.182: INFO: stderr: ""
Jun 7 14:00:58.182: INFO: stdout: "service/redis-master created\n"
Jun 7 14:00:58.182: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jun 7 14:00:58.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2041'
Jun 7 14:00:58.503: INFO: stderr: ""
Jun 7 14:00:58.503: INFO: stdout: "service/frontend created\n"
Jun 7 14:00:58.503: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jun 7 14:00:58.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2041'
Jun 7 14:00:58.753: INFO: stderr: ""
Jun 7 14:00:58.753: INFO: stdout: "deployment.apps/frontend created\n"
Jun 7 14:00:58.753: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jun 7 14:00:58.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2041'
Jun 7 14:00:59.064: INFO: stderr: ""
Jun 7 14:00:59.064: INFO: stdout: "deployment.apps/redis-master created\n"
Jun 7 14:00:59.064: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jun 7 14:00:59.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2041'
Jun 7 14:00:59.365: INFO: stderr: ""
Jun 7 14:00:59.365: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jun 7 14:00:59.365: INFO: Waiting for all frontend pods to be Running.
Jun 7 14:01:09.416: INFO: Waiting for frontend to serve content.
Jun 7 14:01:09.432: INFO: Trying to add a new entry to the guestbook.
Jun 7 14:01:09.459: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jun 7 14:01:09.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2041'
Jun 7 14:01:09.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:01:09.658: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jun 7 14:01:09.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2041'
Jun 7 14:01:09.816: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:01:09.816: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 7 14:01:09.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2041'
Jun 7 14:01:09.933: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:01:09.933: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 7 14:01:09.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2041'
Jun 7 14:01:10.024: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:01:10.024: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 7 14:01:10.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2041'
Jun 7 14:01:10.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:01:10.155: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 7 14:01:10.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2041'
Jun 7 14:01:10.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:01:10.339: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:01:10.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2041" for this suite.
Jun 7 14:01:52.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:01:52.458: INFO: namespace kubectl-2041 deletion completed in 42.094224574s

• [SLOW TEST:55.059 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:01:52.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 14:01:52.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6353'
Jun 7 14:01:52.622: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 14:01:52.623: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jun 7 14:01:54.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6353' Jun 7 14:01:54.870: INFO: stderr: "" Jun 7 14:01:54.870: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:01:54.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6353" for this suite. 
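The stderr line above shows why this test later changed: `kubectl run --generator=deployment/apps.v1` was deprecated and eventually removed. The non-deprecated equivalent of the command the test runs is, as a sketch against a configured cluster:

```shell
# Replacement for the deprecated generator-based `kubectl run`:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=kubectl-6353
```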
Jun 7 14:03:16.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:03:17.015: INFO: namespace kubectl-6353 deletion completed in 1m22.140329314s • [SLOW TEST:84.556 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:03:17.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:03:17.085: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:03:21.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2557" for this suite. 
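The test above reads the pod `log` subresource over a websocket. Outside the e2e framework the same endpoint can be streamed with a plain HTTP request; a sketch in which `$APISERVER`, `$TOKEN`, and `<pod-name>` are placeholders for your cluster's address, a valid bearer token, and the pod created by the test:

```shell
# Stream container logs from the same API endpoint the websocket test hits.
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/pods-2557/pods/<pod-name>/log?follow=true"
```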
Jun 7 14:04:03.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:04:03.223: INFO: namespace pods-2557 deletion completed in 42.101098913s • [SLOW TEST:46.208 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:04:03.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jun 7 14:04:03.312: INFO: Waiting up to 5m0s for pod "client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76" in namespace "containers-8832" to be "success or failure" Jun 7 14:04:03.322: INFO: Pod "client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311973ms Jun 7 14:04:05.325: INFO: Pod "client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013538733s Jun 7 14:04:07.329: INFO: Pod "client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017599796s STEP: Saw pod success Jun 7 14:04:07.329: INFO: Pod "client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76" satisfied condition "success or failure" Jun 7 14:04:07.332: INFO: Trying to get logs from node iruya-worker2 pod client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76 container test-container: STEP: delete the pod Jun 7 14:04:07.347: INFO: Waiting for pod client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76 to disappear Jun 7 14:04:07.392: INFO: Pod client-containers-bedad06e-2a4c-4e84-aa8a-3373e7408f76 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:04:07.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8832" for this suite. Jun 7 14:04:13.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:04:13.484: INFO: namespace containers-8832 deletion completed in 6.088240407s • [SLOW TEST:10.260 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Jun 7 14:04:13.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2401/configmap-test-a8d77221-6096-4dfa-acfc-ae331d881ad2 STEP: Creating a pod to test consume configMaps Jun 7 14:04:13.563: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556" in namespace "configmap-2401" to be "success or failure" Jun 7 14:04:13.567: INFO: Pod "pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129316ms Jun 7 14:04:15.651: INFO: Pod "pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087924055s Jun 7 14:04:17.674: INFO: Pod "pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111532814s STEP: Saw pod success Jun 7 14:04:17.674: INFO: Pod "pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556" satisfied condition "success or failure" Jun 7 14:04:17.677: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556 container env-test: STEP: delete the pod Jun 7 14:04:17.709: INFO: Waiting for pod pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556 to disappear Jun 7 14:04:17.723: INFO: Pod pod-configmaps-7ec39357-101e-4e73-9fa6-f6c61f30f556 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:04:17.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2401" for this suite. 
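A pod equivalent to the one this ConfigMap test generates might look like the following. This is a hand-written sketch, not the framework's generated spec; the ConfigMap name, key, and pod name are illustrative:

```shell
# Minimal pod consuming a ConfigMap key as an environment variable.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
```

Once the pod reaches `Succeeded`, its logs contain the injected variable, which is what the test's "success or failure" check inspects.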
Jun 7 14:04:23.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:04:23.826: INFO: namespace configmap-2401 deletion completed in 6.096137017s • [SLOW TEST:10.342 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:04:23.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 7 14:04:23.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff" in namespace "downward-api-7905" to be "success or failure" Jun 7 14:04:23.890: INFO: Pod "downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.148233ms Jun 7 14:04:26.183: INFO: Pod "downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296111598s Jun 7 14:04:28.201: INFO: Pod "downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313965425s Jun 7 14:04:30.205: INFO: Pod "downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318039186s STEP: Saw pod success Jun 7 14:04:30.205: INFO: Pod "downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff" satisfied condition "success or failure" Jun 7 14:04:30.208: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff container client-container: STEP: delete the pod Jun 7 14:04:30.228: INFO: Waiting for pod downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff to disappear Jun 7 14:04:30.232: INFO: Pod downwardapi-volume-a7c3cdb1-e8be-4c79-b7c0-394f8fd717ff no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:04:30.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7905" for this suite. 
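The behavior under test — the downward API reporting node allocatable CPU when the container declares no CPU limit — can be sketched with a manifest like this (names are illustrative, not the framework's generated spec):

```shell
# Downward API volume exposing the container's effective CPU limit;
# with no limit declared, the value defaults to node allocatable CPU.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
```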
Jun 7 14:04:36.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:04:36.362: INFO: namespace downward-api-7905 deletion completed in 6.127333747s • [SLOW TEST:12.536 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:04:36.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:04:40.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6344" for this suite. 
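The hostAliases behavior this Kubelet test verifies can be reproduced with a small manifest; a sketch with illustrative names and addresses:

```shell
# Pod-level hostAliases are written by the kubelet into /etc/hosts.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
```

The pod's logs should then show `foo.local` and `bar.local` mapped to `127.0.0.1`, which is the assertion the test makes.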
Jun 7 14:05:20.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:05:20.547: INFO: namespace kubelet-test-6344 deletion completed in 40.09902974s • [SLOW TEST:44.184 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:05:20.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 7 14:05:20.679: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4667,SelfLink:/api/v1/namespaces/watch-4667/configmaps/e2e-watch-test-label-changed,UID:2d71b5dd-a9e3-4bbf-8004-8eff2eb19e4d,ResourceVersion:15160940,Generation:0,CreationTimestamp:2020-06-07 14:05:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 14:05:20.679: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4667,SelfLink:/api/v1/namespaces/watch-4667/configmaps/e2e-watch-test-label-changed,UID:2d71b5dd-a9e3-4bbf-8004-8eff2eb19e4d,ResourceVersion:15160941,Generation:0,CreationTimestamp:2020-06-07 14:05:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 7 14:05:20.679: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4667,SelfLink:/api/v1/namespaces/watch-4667/configmaps/e2e-watch-test-label-changed,UID:2d71b5dd-a9e3-4bbf-8004-8eff2eb19e4d,ResourceVersion:15160942,Generation:0,CreationTimestamp:2020-06-07 14:05:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 7 14:05:30.720: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4667,SelfLink:/api/v1/namespaces/watch-4667/configmaps/e2e-watch-test-label-changed,UID:2d71b5dd-a9e3-4bbf-8004-8eff2eb19e4d,ResourceVersion:15160965,Generation:0,CreationTimestamp:2020-06-07 14:05:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 14:05:30.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4667,SelfLink:/api/v1/namespaces/watch-4667/configmaps/e2e-watch-test-label-changed,UID:2d71b5dd-a9e3-4bbf-8004-8eff2eb19e4d,ResourceVersion:15160966,Generation:0,CreationTimestamp:2020-06-07 14:05:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 7 14:05:30.721: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4667,SelfLink:/api/v1/namespaces/watch-4667/configmaps/e2e-watch-test-label-changed,UID:2d71b5dd-a9e3-4bbf-8004-8eff2eb19e4d,ResourceVersion:15160967,Generation:0,CreationTimestamp:2020-06-07 14:05:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:05:30.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4667" for this suite. Jun 7 14:05:36.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:05:36.827: INFO: namespace watch-4667 deletion completed in 6.101521066s • [SLOW TEST:16.280 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:05:36.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:05:36.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2948" for this suite. Jun 7 14:05:42.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:05:43.023: INFO: namespace services-2948 deletion completed in 6.09644988s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.195 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:05:43.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-8355/secret-test-3f570f3a-fc3f-436f-9478-bb8182c6e642 STEP: Creating a pod to test consume secrets Jun 7 14:05:43.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334" in namespace "secrets-8355" to be "success or failure" Jun 7 14:05:43.133: INFO: Pod "pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334": Phase="Pending", Reason="", readiness=false. Elapsed: 7.706266ms Jun 7 14:05:45.137: INFO: Pod "pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012313377s Jun 7 14:05:47.142: INFO: Pod "pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016749551s STEP: Saw pod success Jun 7 14:05:47.142: INFO: Pod "pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334" satisfied condition "success or failure" Jun 7 14:05:47.145: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334 container env-test: STEP: delete the pod Jun 7 14:05:47.176: INFO: Waiting for pod pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334 to disappear Jun 7 14:05:47.181: INFO: Pod pod-configmaps-8ac3e9d2-9fab-4b34-b537-4fa82db0f334 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:05:47.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8355" for this suite. 
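The Secret-as-environment pattern this test exercises can be sketched as follows (Secret name, key, and pod name are illustrative, not the framework's generated values):

```shell
# Secret consumed as an environment variable, mirroring the env-test container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
```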
Jun 7 14:05:53.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:05:53.287: INFO: namespace secrets-8355 deletion completed in 6.103415398s • [SLOW TEST:10.264 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:05:53.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:05:58.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1844" for this suite. 
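The ordering guarantee this Watchers test asserts — concurrent watches on the same resource observe events in the same order — can be checked by hand; a rough sketch, assuming a configured cluster and a namespace of your choice in place of `<ns>`:

```shell
# Run two concurrent watches, generate some ConfigMap churn, then stop
# the watches; both logs should list the events in identical order.
kubectl get configmaps --namespace=<ns> --watch > watch-a.log &
kubectl get configmaps --namespace=<ns> --watch > watch-b.log &
# ...create/modify/delete some ConfigMaps, then kill the watches and:
diff watch-a.log watch-b.log
```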
Jun 7 14:06:04.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:06:04.919: INFO: namespace watch-1844 deletion completed in 6.174111603s • [SLOW TEST:11.632 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:06:04.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jun 7 14:06:05.025: INFO: Waiting up to 5m0s for pod "client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4" in namespace "containers-5627" to be "success or failure" Jun 7 14:06:05.030: INFO: Pod "client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.849089ms Jun 7 14:06:07.071: INFO: Pod "client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045699696s Jun 7 14:06:09.075: INFO: Pod "client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049249722s STEP: Saw pod success Jun 7 14:06:09.075: INFO: Pod "client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4" satisfied condition "success or failure" Jun 7 14:06:09.077: INFO: Trying to get logs from node iruya-worker pod client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4 container test-container: STEP: delete the pod Jun 7 14:06:09.144: INFO: Waiting for pod client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4 to disappear Jun 7 14:06:09.151: INFO: Pod client-containers-5be93a82-61e3-411f-a6b9-c72439288cb4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:06:09.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5627" for this suite. Jun 7 14:06:15.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:06:15.241: INFO: namespace containers-5627 deletion completed in 6.08566781s • [SLOW TEST:10.322 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jun 7 14:06:15.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:06:15.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 7 14:06:15.468: INFO: stderr: "" Jun 7 14:06:15.468: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:06:15.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8625" for this suite. 
Jun 7 14:06:21.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:06:21.643: INFO: namespace kubectl-8625 deletion completed in 6.170069218s • [SLOW TEST:6.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:06:21.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jun 7 14:06:21.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 7 14:06:22.207: INFO: stderr: "" Jun 7 14:06:22.207: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:06:22.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2074" for this suite. Jun 7 14:06:28.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:06:28.339: INFO: namespace kubectl-2074 deletion completed in 6.128612611s • [SLOW TEST:6.696 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:06:28.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:06:28.435: INFO: Creating deployment "test-recreate-deployment" Jun 7 14:06:28.439: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 7 14:06:28.464: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 7 14:06:30.473: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 7 14:06:30.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727135588, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727135588, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727135588, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727135588, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 14:06:32.481: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 7 14:06:32.489: INFO: Updating deployment test-recreate-deployment Jun 7 14:06:32.489: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 7 14:06:32.719: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2473,SelfLink:/apis/apps/v1/namespaces/deployment-2473/deployments/test-recreate-deployment,UID:241631eb-193b-45b4-b40d-f067047181d3,ResourceVersion:15161326,Generation:2,CreationTimestamp:2020-06-07 14:06:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-07 14:06:32 +0000 UTC 2020-06-07 14:06:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-07 14:06:32 +0000 UTC 2020-06-07 14:06:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 7 14:06:32.723: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2473,SelfLink:/apis/apps/v1/namespaces/deployment-2473/replicasets/test-recreate-deployment-5c8c9cc69d,UID:883b2c79-f70e-46aa-a473-49a3d44a6d88,ResourceVersion:15161323,Generation:1,CreationTimestamp:2020-06-07 14:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 241631eb-193b-45b4-b40d-f067047181d3 0xc0017a17f7 0xc0017a17f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 14:06:32.723: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 7 14:06:32.723: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2473,SelfLink:/apis/apps/v1/namespaces/deployment-2473/replicasets/test-recreate-deployment-6df85df6b9,UID:7f994b1c-250a-4068-80d1-0e835cbb8a32,ResourceVersion:15161315,Generation:2,CreationTimestamp:2020-06-07 14:06:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 241631eb-193b-45b4-b40d-f067047181d3 0xc0017a18c7 0xc0017a18c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 14:06:32.727: INFO: Pod "test-recreate-deployment-5c8c9cc69d-tqrrr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-tqrrr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2473,SelfLink:/api/v1/namespaces/deployment-2473/pods/test-recreate-deployment-5c8c9cc69d-tqrrr,UID:0600344f-3df6-4229-8349-f916e4ad74b0,ResourceVersion:15161327,Generation:0,CreationTimestamp:2020-06-07 14:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 883b2c79-f70e-46aa-a473-49a3d44a6d88 0xc000d5e3d7 0xc000d5e3d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtgnm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtgnm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jtgnm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d5e4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d5e510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:06:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:06:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:06:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:06:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-07 14:06:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:06:32.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2473" for this suite. 
Jun 7 14:06:38.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:06:38.809: INFO: namespace deployment-2473 deletion completed in 6.078938338s • [SLOW TEST:10.470 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:06:38.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:06:43.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5493" for this suite. 
Jun 7 14:07:05.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:07:06.012: INFO: namespace replication-controller-5493 deletion completed in 22.085498823s • [SLOW TEST:27.202 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:07:06.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': 
should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:07:40.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3618" for this suite. Jun 7 14:07:46.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:07:46.464: INFO: namespace container-runtime-3618 deletion completed in 6.132271585s • [SLOW TEST:40.451 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:07:46.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 7 14:07:54.326: INFO: 7 pods remaining Jun 7 14:07:54.327: INFO: 0 pods has nil DeletionTimestamp Jun 7 14:07:54.327: INFO: Jun 7 14:07:55.414: INFO: 0 pods remaining Jun 7 14:07:55.414: INFO: 0 pods has nil DeletionTimestamp Jun 7 14:07:55.414: INFO: STEP: Gathering metrics W0607 14:07:55.792932 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 7 14:07:55.793: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:07:55.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6598" for this suite. 
Jun 7 14:08:01.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:08:02.031: INFO: namespace gc-6598 deletion completed in 6.235059832s • [SLOW TEST:15.565 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:08:02.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jun 7 14:08:02.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4214' Jun 7 14:08:05.618: INFO: stderr: "" Jun 7 14:08:05.618: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 7 14:08:05.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4214' Jun 7 14:08:05.722: INFO: stderr: "" Jun 7 14:08:05.723: INFO: stdout: "update-demo-nautilus-2w8fz update-demo-nautilus-smrjf " Jun 7 14:08:05.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2w8fz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:05.838: INFO: stderr: "" Jun 7 14:08:05.838: INFO: stdout: "" Jun 7 14:08:05.838: INFO: update-demo-nautilus-2w8fz is created but not running Jun 7 14:08:10.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4214' Jun 7 14:08:10.933: INFO: stderr: "" Jun 7 14:08:10.933: INFO: stdout: "update-demo-nautilus-2w8fz update-demo-nautilus-smrjf " Jun 7 14:08:10.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2w8fz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:11.029: INFO: stderr: "" Jun 7 14:08:11.029: INFO: stdout: "true" Jun 7 14:08:11.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2w8fz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:11.129: INFO: stderr: "" Jun 7 14:08:11.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 14:08:11.129: INFO: validating pod update-demo-nautilus-2w8fz Jun 7 14:08:11.134: INFO: got data: { "image": "nautilus.jpg" } Jun 7 14:08:11.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 14:08:11.134: INFO: update-demo-nautilus-2w8fz is verified up and running Jun 7 14:08:11.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smrjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:11.234: INFO: stderr: "" Jun 7 14:08:11.234: INFO: stdout: "true" Jun 7 14:08:11.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smrjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:11.330: INFO: stderr: "" Jun 7 14:08:11.330: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 14:08:11.330: INFO: validating pod update-demo-nautilus-smrjf Jun 7 14:08:11.334: INFO: got data: { "image": "nautilus.jpg" } Jun 7 14:08:11.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 7 14:08:11.334: INFO: update-demo-nautilus-smrjf is verified up and running STEP: rolling-update to new replication controller Jun 7 14:08:11.336: INFO: scanned /root for discovery docs: Jun 7 14:08:11.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4214' Jun 7 14:08:33.966: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 7 14:08:33.966: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 7 14:08:33.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4214' Jun 7 14:08:34.059: INFO: stderr: "" Jun 7 14:08:34.059: INFO: stdout: "update-demo-kitten-4xxjp update-demo-kitten-thzjs " Jun 7 14:08:34.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4xxjp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:34.146: INFO: stderr: "" Jun 7 14:08:34.146: INFO: stdout: "true" Jun 7 14:08:34.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4xxjp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:34.232: INFO: stderr: "" Jun 7 14:08:34.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 7 14:08:34.232: INFO: validating pod update-demo-kitten-4xxjp Jun 7 14:08:34.250: INFO: got data: { "image": "kitten.jpg" } Jun 7 14:08:34.250: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 7 14:08:34.250: INFO: update-demo-kitten-4xxjp is verified up and running Jun 7 14:08:34.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thzjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:34.347: INFO: stderr: "" Jun 7 14:08:34.347: INFO: stdout: "true" Jun 7 14:08:34.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thzjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4214' Jun 7 14:08:34.447: INFO: stderr: "" Jun 7 14:08:34.447: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 7 14:08:34.447: INFO: validating pod update-demo-kitten-thzjs Jun 7 14:08:34.459: INFO: got data: { "image": "kitten.jpg" } Jun 7 14:08:34.459: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 7 14:08:34.459: INFO: update-demo-kitten-thzjs is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:08:34.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4214" for this suite. 
Jun 7 14:08:56.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:08:56.558: INFO: namespace kubectl-4214 deletion completed in 22.095393176s • [SLOW TEST:54.527 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:08:56.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 7 14:08:56.629: INFO: Waiting up to 5m0s for pod "pod-c79a487c-95a9-4b26-9b5d-1e228523749f" in namespace "emptydir-3578" to be "success or failure" Jun 7 14:08:56.632: INFO: Pod "pod-c79a487c-95a9-4b26-9b5d-1e228523749f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.905812ms Jun 7 14:08:58.637: INFO: Pod "pod-c79a487c-95a9-4b26-9b5d-1e228523749f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007642206s Jun 7 14:09:00.641: INFO: Pod "pod-c79a487c-95a9-4b26-9b5d-1e228523749f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011827148s STEP: Saw pod success Jun 7 14:09:00.641: INFO: Pod "pod-c79a487c-95a9-4b26-9b5d-1e228523749f" satisfied condition "success or failure" Jun 7 14:09:00.644: INFO: Trying to get logs from node iruya-worker pod pod-c79a487c-95a9-4b26-9b5d-1e228523749f container test-container: STEP: delete the pod Jun 7 14:09:00.676: INFO: Waiting for pod pod-c79a487c-95a9-4b26-9b5d-1e228523749f to disappear Jun 7 14:09:00.680: INFO: Pod pod-c79a487c-95a9-4b26-9b5d-1e228523749f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:09:00.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3578" for this suite. Jun 7 14:09:06.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:09:06.801: INFO: namespace emptydir-3578 deletion completed in 6.117907978s • [SLOW TEST:10.243 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:09:06.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 7 14:09:10.893: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:09:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3371" for this suite. 
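The termination-message test above runs a container as a non-root user with a non-default `terminationMessagePath` and then checks that the kubelet surfaces "DONE" in the container status. A hedged sketch of such a pod is below; the image, the exact path, and the UID are illustrative assumptions, while the "DONE" message and the non-root/non-default-path behavior come from the log.

```yaml
# Sketch only: non-root container writing "DONE" to a custom termination path.
# Image, path, and UID are assumptions; the expected message is from the log.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000   # non-root, as the test name requires
```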
Jun 7 14:09:16.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:09:17.038: INFO: namespace container-runtime-3371 deletion completed in 6.124450674s • [SLOW TEST:10.235 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:09:17.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jun 7 14:09:17.099: INFO: Waiting up to 5m0s for pod "pod-977228b4-6b87-49f9-83f0-63d8453d88dc" in namespace "emptydir-4820" to be "success or failure" Jun 7 14:09:17.103: INFO: Pod 
"pod-977228b4-6b87-49f9-83f0-63d8453d88dc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694503ms Jun 7 14:09:19.134: INFO: Pod "pod-977228b4-6b87-49f9-83f0-63d8453d88dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035535073s Jun 7 14:09:21.139: INFO: Pod "pod-977228b4-6b87-49f9-83f0-63d8453d88dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040271662s STEP: Saw pod success Jun 7 14:09:21.139: INFO: Pod "pod-977228b4-6b87-49f9-83f0-63d8453d88dc" satisfied condition "success or failure" Jun 7 14:09:21.143: INFO: Trying to get logs from node iruya-worker pod pod-977228b4-6b87-49f9-83f0-63d8453d88dc container test-container: STEP: delete the pod Jun 7 14:09:21.175: INFO: Waiting for pod pod-977228b4-6b87-49f9-83f0-63d8453d88dc to disappear Jun 7 14:09:21.186: INFO: Pod pod-977228b4-6b87-49f9-83f0-63d8453d88dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:09:21.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4820" for this suite. 
Jun 7 14:09:27.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:09:27.311: INFO: namespace emptydir-4820 deletion completed in 6.121231031s • [SLOW TEST:10.273 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:09:27.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:09:27.410: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 7 14:09:27.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:27.457: INFO: Number of nodes with available pods: 0 Jun 7 14:09:27.457: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:09:28.462: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:28.466: INFO: Number of nodes with available pods: 0 Jun 7 14:09:28.466: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:09:29.496: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:29.499: INFO: Number of nodes with available pods: 0 Jun 7 14:09:29.499: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:09:30.461: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:30.463: INFO: Number of nodes with available pods: 0 Jun 7 14:09:30.463: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:09:31.468: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:31.471: INFO: Number of nodes with available pods: 1 Jun 7 14:09:31.471: INFO: Node iruya-worker2 is running more than one daemon pod Jun 7 14:09:32.462: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:32.466: INFO: Number of nodes with available pods: 2 Jun 7 14:09:32.466: INFO: Number of running nodes: 
2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 7 14:09:32.501: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:32.501: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:32.522: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:33.535: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:33.535: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:33.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:34.525: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:34.525: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:34.529: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:35.526: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:35.526: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:35.526: INFO: Wrong image for pod: daemon-set-zldg2. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:35.530: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:36.526: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:36.527: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:36.527: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:36.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:37.528: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:37.528: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:37.528: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:37.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:38.526: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:38.526: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:38.526: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 7 14:09:38.529: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:39.526: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:39.526: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:39.526: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:39.530: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:40.526: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:40.526: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:40.526: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:40.529: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:41.525: INFO: Wrong image for pod: daemon-set-ls2sz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:41.525: INFO: Pod daemon-set-ls2sz is not available Jun 7 14:09:41.525: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 7 14:09:41.528: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:42.527: INFO: Pod daemon-set-5nz9h is not available Jun 7 14:09:42.527: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:42.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:43.526: INFO: Pod daemon-set-5nz9h is not available Jun 7 14:09:43.526: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:43.535: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:44.526: INFO: Pod daemon-set-5nz9h is not available Jun 7 14:09:44.527: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:44.530: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:45.525: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:45.529: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:46.527: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 7 14:09:46.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:47.526: INFO: Wrong image for pod: daemon-set-zldg2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 7 14:09:47.526: INFO: Pod daemon-set-zldg2 is not available Jun 7 14:09:47.529: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:48.527: INFO: Pod daemon-set-9fhq9 is not available Jun 7 14:09:48.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 7 14:09:48.535: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:48.538: INFO: Number of nodes with available pods: 1 Jun 7 14:09:48.538: INFO: Node iruya-worker2 is running more than one daemon pod Jun 7 14:09:49.543: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:49.552: INFO: Number of nodes with available pods: 1 Jun 7 14:09:49.552: INFO: Node iruya-worker2 is running more than one daemon pod Jun 7 14:09:50.544: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:50.548: INFO: Number of nodes with available pods: 1 Jun 7 14:09:50.548: INFO: Node iruya-worker2 is running more than one daemon pod Jun 7 14:09:51.543: INFO: DaemonSet 
pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:09:51.559: INFO: Number of nodes with available pods: 2 Jun 7 14:09:51.559: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-221, will wait for the garbage collector to delete the pods Jun 7 14:09:51.643: INFO: Deleting DaemonSet.extensions daemon-set took: 7.027271ms Jun 7 14:09:51.943: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.294615ms Jun 7 14:10:01.946: INFO: Number of nodes with available pods: 0 Jun 7 14:10:01.946: INFO: Number of running nodes: 0, number of available pods: 0 Jun 7 14:10:01.949: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-221/daemonsets","resourceVersion":"15162285"},"items":null} Jun 7 14:10:01.952: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-221/pods","resourceVersion":"15162285"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:10:01.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-221" for this suite. 
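The DaemonSet test above updates the pod template image and waits for the RollingUpdate strategy to replace pods node by node, which is exactly the "Wrong image for pod" / "is not available" churn in the log. A hedged sketch of such a DaemonSet is below; the two image names match the log, while the label scheme and `maxUnavailable` value are assumptions.

```yaml
# Sketch: DaemonSet with RollingUpdate. Changing the template image (here from
# nginx:1.14-alpine to the redis:1.0 test image, per the log) rolls pods over
# one node at a time. Labels and maxUnavailable are assumed, not from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Note the taint messages in the log: the test DaemonSet carries no toleration for `node-role.kubernetes.io/master:NoSchedule`, so the control-plane node is skipped when counting available pods.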
Jun 7 14:10:07.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:10:08.071: INFO: namespace daemonsets-221 deletion completed in 6.105632895s • [SLOW TEST:40.760 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:10:08.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:10:08.112: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:10:12.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4212" for this suite. 
Jun 7 14:10:52.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:10:52.381: INFO: namespace pods-4212 deletion completed in 40.093322818s • [SLOW TEST:44.309 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:10:52.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:10:56.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-378" for this suite. 
Jun 7 14:11:36.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:11:36.591: INFO: namespace kubelet-test-378 deletion completed in 40.098395702s • [SLOW TEST:44.208 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:11:36.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:12:36.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4309" for this suite. 
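The readiness-probe test above relies on a probe that always fails: the pod reaches Running but its Ready condition stays False, and, because readiness failures never trigger restarts (only liveness failures do), the restart count stays at zero for the full observation window. A minimal sketch, with image and probe command as assumptions:

```yaml
# Sketch: a readiness probe that always fails. The pod runs but is never Ready
# and is never restarted (restarts are driven by liveness probes, not readiness).
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails => Ready stays False
      initialDelaySeconds: 2
      periodSeconds: 5
```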
Jun 7 14:12:58.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:12:58.787: INFO: namespace container-probe-4309 deletion completed in 22.105522636s • [SLOW TEST:82.195 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:12:58.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 7 14:13:02.971: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:13:02.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2033" for this suite. Jun 7 14:13:09.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:13:09.125: INFO: namespace container-runtime-2033 deletion completed in 6.134529324s • [SLOW TEST:10.337 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:13:09.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 7 14:13:09.180: INFO: namespace kubectl-610 Jun 7 14:13:09.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-610' Jun 7 14:13:09.413: INFO: stderr: "" Jun 7 14:13:09.413: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 7 14:13:10.417: INFO: Selector matched 1 pods for map[app:redis] Jun 7 14:13:10.417: INFO: Found 0 / 1 Jun 7 14:13:11.418: INFO: Selector matched 1 pods for map[app:redis] Jun 7 14:13:11.418: INFO: Found 0 / 1 Jun 7 14:13:12.436: INFO: Selector matched 1 pods for map[app:redis] Jun 7 14:13:12.436: INFO: Found 1 / 1 Jun 7 14:13:12.436: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 7 14:13:12.438: INFO: Selector matched 1 pods for map[app:redis] Jun 7 14:13:12.438: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 7 14:13:12.438: INFO: wait on redis-master startup in kubectl-610 Jun 7 14:13:12.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqgqv redis-master --namespace=kubectl-610' Jun 7 14:13:12.552: INFO: stderr: "" Jun 7 14:13:12.552: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Jun 14:13:12.063 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jun 14:13:12.063 # Server started, Redis version 3.2.12\n1:M 07 Jun 14:13:12.063 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jun 14:13:12.063 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 7 14:13:12.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-610' Jun 7 14:13:12.681: INFO: stderr: "" Jun 7 14:13:12.681: INFO: stdout: "service/rm2 exposed\n" Jun 7 14:13:12.684: INFO: Service rm2 in namespace kubectl-610 found. STEP: exposing service Jun 7 14:13:14.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-610' Jun 7 14:13:14.820: INFO: stderr: "" Jun 7 14:13:14.820: INFO: stdout: "service/rm3 exposed\n" Jun 7 14:13:14.850: INFO: Service rm3 in namespace kubectl-610 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:13:16.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-610" for this suite. Jun 7 14:13:40.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:13:40.970: INFO: namespace kubectl-610 deletion completed in 24.109839572s • [SLOW TEST:31.845 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:13:40.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 7 14:13:41.123: INFO: Waiting up to 5m0s for pod "downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e" in namespace "downward-api-3403" to be "success or failure" Jun 7 14:13:41.139: INFO: Pod 
"downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.510621ms Jun 7 14:13:43.144: INFO: Pod "downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020660721s Jun 7 14:13:45.148: INFO: Pod "downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025181352s STEP: Saw pod success Jun 7 14:13:45.148: INFO: Pod "downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e" satisfied condition "success or failure" Jun 7 14:13:45.152: INFO: Trying to get logs from node iruya-worker2 pod downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e container dapi-container: STEP: delete the pod Jun 7 14:13:45.192: INFO: Waiting for pod downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e to disappear Jun 7 14:13:45.222: INFO: Pod downward-api-8b1ddb60-4345-4071-ac51-42f795687f8e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:13:45.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3403" for this suite. 
Jun 7 14:13:51.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:13:51.336: INFO: namespace downward-api-3403 deletion completed in 6.090801709s • [SLOW TEST:10.366 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:13:51.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Jun 7 14:13:51.410: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9094" to be "success or failure" Jun 7 14:13:51.473: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 62.841815ms Jun 7 14:13:53.478: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.067567583s Jun 7 14:13:55.481: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071128586s Jun 7 14:13:57.484: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074059763s STEP: Saw pod success Jun 7 14:13:57.484: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 7 14:13:57.487: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 7 14:13:57.516: INFO: Waiting for pod pod-host-path-test to disappear Jun 7 14:13:57.524: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:13:57.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9094" for this suite. Jun 7 14:14:03.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:14:03.612: INFO: namespace hostpath-9094 deletion completed in 6.08565685s • [SLOW TEST:12.275 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jun 7 14:14:03.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:14:03.724: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:14:04.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1189" for this suite. Jun 7 14:14:10.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:14:10.888: INFO: namespace custom-resource-definition-1189 deletion completed in 6.092744136s • [SLOW TEST:7.275 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 
14:14:10.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ec8c2ba5-2875-4a26-a8b5-977c20c58efa STEP: Creating a pod to test consume configMaps Jun 7 14:14:10.952: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28" in namespace "projected-9950" to be "success or failure" Jun 7 14:14:10.967: INFO: Pod "pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28": Phase="Pending", Reason="", readiness=false. Elapsed: 14.865048ms Jun 7 14:14:12.971: INFO: Pod "pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018817746s Jun 7 14:14:14.976: INFO: Pod "pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023532513s STEP: Saw pod success Jun 7 14:14:14.976: INFO: Pod "pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28" satisfied condition "success or failure" Jun 7 14:14:14.979: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28 container projected-configmap-volume-test: STEP: delete the pod Jun 7 14:14:15.009: INFO: Waiting for pod pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28 to disappear Jun 7 14:14:15.027: INFO: Pod pod-projected-configmaps-3677f86e-559e-4b88-841f-aaac69242d28 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:14:15.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9950" for this suite. Jun 7 14:14:21.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:14:21.156: INFO: namespace projected-9950 deletion completed in 6.125003362s • [SLOW TEST:10.268 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:14:21.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:14:21.325: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9cc07971-9c16-4d61-9951-004753a1a9d2", Controller:(*bool)(0xc0036919aa), BlockOwnerDeletion:(*bool)(0xc0036919ab)}} Jun 7 14:14:21.377: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"97e33eb8-8a1b-40b8-90d7-2c5d38472792", Controller:(*bool)(0xc0030e84ca), BlockOwnerDeletion:(*bool)(0xc0030e84cb)}} Jun 7 14:14:21.388: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c0d49c7c-19a9-464b-b6ae-d03f47673c3e", Controller:(*bool)(0xc0030e865a), BlockOwnerDeletion:(*bool)(0xc0030e865b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:14:26.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-983" for this suite. 
Jun 7 14:14:32.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:14:32.520: INFO: namespace gc-983 deletion completed in 6.103728504s • [SLOW TEST:11.364 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:14:32.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 7 14:14:32.571: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 7 14:14:32.624: INFO: Waiting for terminating namespaces to be deleted... 
Jun 7 14:14:32.628: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 7 14:14:32.635: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 7 14:14:32.635: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 14:14:32.635: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 7 14:14:32.635: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 14:14:32.635: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 7 14:14:32.640: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 7 14:14:32.640: INFO: Container coredns ready: true, restart count 0 Jun 7 14:14:32.640: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 7 14:14:32.640: INFO: Container coredns ready: true, restart count 0 Jun 7 14:14:32.640: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 7 14:14:32.640: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 14:14:32.640: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 7 14:14:32.640: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Jun 7 14:14:32.700: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Jun 7 14:14:32.700: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Jun 7 14:14:32.700: INFO: Pod kindnet-gwz5g 
requesting resource cpu=100m on Node iruya-worker Jun 7 14:14:32.700: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Jun 7 14:14:32.700: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Jun 7 14:14:32.700: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245.1616488945f2ef31], Reason = [Scheduled], Message = [Successfully assigned sched-pred-84/filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245.1616488991c134fb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245.16164889f440cdcd], Reason = [Created], Message = [Created container filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245] STEP: Considering event: Type = [Normal], Name = [filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245.1616488a0be190b1], Reason = [Started], Message = [Started container filler-pod-6e8d3ef3-65ec-4541-b35f-f922661b3245] STEP: Considering event: Type = [Normal], Name = [filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69.16164889462bff7f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-84/filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69.16164889c95d803c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69.1616488a0fa4a82f], Reason = [Created], Message = [Created container 
filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69] STEP: Considering event: Type = [Normal], Name = [filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69.1616488a1edaeb09], Reason = [Started], Message = [Started container filler-pod-717dbcf0-c3d3-4bdf-a3aa-0a0ad4093c69] STEP: Considering event: Type = [Warning], Name = [additional-pod.1616488aad4fc00e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:14:39.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-84" for this suite. Jun 7 14:14:45.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:14:45.955: INFO: namespace sched-pred-84 deletion completed in 6.097150999s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.434 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] 
[sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:14:45.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 7 14:14:46.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68" in namespace "downward-api-6943" to be "success or failure" Jun 7 14:14:46.206: INFO: Pod "downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620898ms Jun 7 14:14:48.210: INFO: Pod "downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014538721s Jun 7 14:14:50.215: INFO: Pod "downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01928966s STEP: Saw pod success Jun 7 14:14:50.215: INFO: Pod "downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68" satisfied condition "success or failure" Jun 7 14:14:50.218: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68 container client-container: STEP: delete the pod Jun 7 14:14:50.237: INFO: Waiting for pod downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68 to disappear Jun 7 14:14:50.242: INFO: Pod downwardapi-volume-f749808f-3f7d-47b0-9108-4d4cb66dcb68 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:14:50.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6943" for this suite. Jun 7 14:14:56.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:14:56.331: INFO: namespace downward-api-6943 deletion completed in 6.086625015s • [SLOW TEST:10.376 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:14:56.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service 
account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 7 14:14:56.394: INFO: Waiting up to 5m0s for pod "pod-8aafca28-cd8c-4901-84a1-c225c88875e0" in namespace "emptydir-934" to be "success or failure" Jun 7 14:14:56.398: INFO: Pod "pod-8aafca28-cd8c-4901-84a1-c225c88875e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.548168ms Jun 7 14:14:58.403: INFO: Pod "pod-8aafca28-cd8c-4901-84a1-c225c88875e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008310091s Jun 7 14:15:00.408: INFO: Pod "pod-8aafca28-cd8c-4901-84a1-c225c88875e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013076639s STEP: Saw pod success Jun 7 14:15:00.408: INFO: Pod "pod-8aafca28-cd8c-4901-84a1-c225c88875e0" satisfied condition "success or failure" Jun 7 14:15:00.411: INFO: Trying to get logs from node iruya-worker pod pod-8aafca28-cd8c-4901-84a1-c225c88875e0 container test-container: STEP: delete the pod Jun 7 14:15:00.464: INFO: Waiting for pod pod-8aafca28-cd8c-4901-84a1-c225c88875e0 to disappear Jun 7 14:15:00.471: INFO: Pod pod-8aafca28-cd8c-4901-84a1-c225c88875e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:15:00.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-934" for this suite. 
Jun 7 14:15:06.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:15:06.580: INFO: namespace emptydir-934 deletion completed in 6.105824023s • [SLOW TEST:10.248 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:15:06.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 7 14:15:06.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7921' Jun 7 14:15:06.749: INFO: stderr: "" Jun 7 14:15:06.749: INFO: stdout: "pod/e2e-test-nginx-pod 
created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 7 14:15:11.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7921 -o json' Jun 7 14:15:11.898: INFO: stderr: "" Jun 7 14:15:11.898: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-07T14:15:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7921\",\n \"resourceVersion\": \"15163326\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7921/pods/e2e-test-nginx-pod\",\n \"uid\": \"0871a01a-8926-4c0d-ad38-f242ce384fc9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-w9jwl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-w9jwl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-w9jwl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T14:15:06Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T14:15:09Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T14:15:09Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T14:15:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://77b0cd83155302eaa5f33e1f4c966b604f8c5005b2af26a5275e9d7baebbd42d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-07T14:15:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.225\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-07T14:15:06Z\"\n }\n}\n" STEP: replace the image in the pod Jun 7 14:15:11.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7921' Jun 7 14:15:12.195: INFO: stderr: "" Jun 7 14:15:12.195: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jun 7 14:15:12.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7921' Jun 7 14:15:21.875: INFO: stderr: "" Jun 7 
14:15:21.875: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:15:21.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7921" for this suite. Jun 7 14:15:27.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:15:28.025: INFO: namespace kubectl-7921 deletion completed in 6.147224891s • [SLOW TEST:21.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:15:28.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-5c4a3566-a86c-4802-984e-d623336a567e STEP: Creating a pod to test consume secrets Jun 7 
14:15:28.111: INFO: Waiting up to 5m0s for pod "pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4" in namespace "secrets-5215" to be "success or failure" Jun 7 14:15:28.118: INFO: Pod "pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252687ms Jun 7 14:15:30.169: INFO: Pod "pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057908511s Jun 7 14:15:32.172: INFO: Pod "pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060742892s STEP: Saw pod success Jun 7 14:15:32.172: INFO: Pod "pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4" satisfied condition "success or failure" Jun 7 14:15:32.174: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4 container secret-volume-test: STEP: delete the pod Jun 7 14:15:32.191: INFO: Waiting for pod pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4 to disappear Jun 7 14:15:32.195: INFO: Pod pod-secrets-f1f3a2b2-e182-4a79-a243-a3d0a4a64de4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:15:32.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5215" for this suite. 
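The secret-volume test above sets an explicit Item Mode, and the pod JSON earlier in this log shows `"defaultMode": 420` on the service-account token volume. The numbers only look inconsistent because the API serializes file modes as decimal integers while test names quote them in octal:

```python
# Kubernetes reports volume file modes as decimal in JSON,
# while test names ("0666", "0644") use octal notation.
default_mode = 420            # as seen in the pod JSON above
assert oct(default_mode) == "0o644"

item_mode = 0o666             # the mode quoted in the test names
assert item_mode == 438       # what the API would report in JSON
```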
Jun 7 14:15:38.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:15:38.296: INFO: namespace secrets-5215 deletion completed in 6.097885551s • [SLOW TEST:10.271 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:15:38.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
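The lifecycle-hook tests below pair a handler pod (created in BeforeEach) with a pod carrying an exec hook. A hypothetical minimal manifest for the preStop case, built as a Python dict (image, command, and handler address are illustrative assumptions, not the framework's exact spec):

```python
import json

# Sketch of a pod whose container runs an exec preStop hook that
# reports back to the handler pod before the container terminates.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-exec-hook",
            "image": "busybox",  # illustrative; the e2e suite uses its own images
            "lifecycle": {
                "preStop": {
                    "exec": {
                        # hypothetical handler endpoint, for illustration only
                        "command": ["sh", "-c",
                                    "wget -qO- http://handler:8080/echo?msg=prestop"],
                    }
                }
            },
        }],
        "terminationGracePeriodSeconds": 30,
    },
}

manifest = json.dumps(pod, indent=2)
```

The long "still exists" sequences in the log are the framework waiting out that preStop hook plus the 30s grace period after deleting the pod.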
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 7 14:15:46.413: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:46.418: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:15:48.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:48.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:15:50.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:50.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:15:52.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:52.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:15:54.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:54.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:15:56.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:56.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:15:58.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:15:58.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:00.419: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:16:00.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:02.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:16:02.422: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:04.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:16:04.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:06.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:16:06.430: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:08.418: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear Jun 7 14:16:08.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:10.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:16:10.423: INFO: Pod pod-with-prestop-exec-hook still exists Jun 7 14:16:12.418: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 7 14:16:12.423: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:16:12.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2034" for this suite. Jun 7 14:16:46.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:16:46.536: INFO: namespace container-lifecycle-hook-2034 deletion completed in 34.10081453s • [SLOW TEST:68.239 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:16:46.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 7 14:16:54.679: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:16:54.703: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:16:56.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:16:56.706: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:16:58.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:16:58.707: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:00.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:00.707: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:02.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:02.707: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:04.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:04.707: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:06.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:06.707: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:08.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:08.708: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:10.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:10.707: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:12.703: INFO: 
Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:12.708: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 14:17:14.703: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 14:17:14.707: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:17:14.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4907" for this suite. Jun 7 14:17:36.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:17:36.820: INFO: namespace container-lifecycle-hook-4907 deletion completed in 22.107900154s • [SLOW TEST:50.284 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:17:36.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0607 14:18:17.286305 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 7 14:18:17.286: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:18:17.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1256" for this suite. 
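The garbage-collector test above deletes the replication controller with delete options that ask the API server to orphan dependents rather than cascade the delete to the RC's pods. Sketched as the request body such a delete would carry (hand-built for illustration, not client-go output):

```python
import json

# DeleteOptions telling the API server to orphan the RC's pods
# instead of letting the garbage collector cascade-delete them.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",
}
body = json.dumps(delete_options)
```

The 30-second wait in the log is the test confirming the garbage collector leaves the orphaned pods alone.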
Jun 7 14:18:25.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:18:25.401: INFO: namespace gc-1256 deletion completed in 8.11225047s • [SLOW TEST:48.581 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:18:25.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 7 14:18:27.128: INFO: Pod name wrapped-volume-race-da35d164-28fd-4766-ac94-9e2c62624d32: Found 0 pods out of 5 Jun 7 14:18:32.139: INFO: Pod name wrapped-volume-race-da35d164-28fd-4766-ac94-9e2c62624d32: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-da35d164-28fd-4766-ac94-9e2c62624d32 in namespace emptydir-wrapper-3926, will wait for the garbage collector to delete the pods Jun 7 14:18:46.238: INFO: Deleting ReplicationController 
wrapped-volume-race-da35d164-28fd-4766-ac94-9e2c62624d32 took: 8.537123ms Jun 7 14:18:46.538: INFO: Terminating ReplicationController wrapped-volume-race-da35d164-28fd-4766-ac94-9e2c62624d32 pods took: 300.285374ms STEP: Creating RC which spawns configmap-volume pods Jun 7 14:19:32.674: INFO: Pod name wrapped-volume-race-4f434ab2-65b2-42b6-9756-692fd5fec4d8: Found 0 pods out of 5 Jun 7 14:19:37.683: INFO: Pod name wrapped-volume-race-4f434ab2-65b2-42b6-9756-692fd5fec4d8: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4f434ab2-65b2-42b6-9756-692fd5fec4d8 in namespace emptydir-wrapper-3926, will wait for the garbage collector to delete the pods Jun 7 14:19:51.775: INFO: Deleting ReplicationController wrapped-volume-race-4f434ab2-65b2-42b6-9756-692fd5fec4d8 took: 12.464675ms Jun 7 14:19:52.075: INFO: Terminating ReplicationController wrapped-volume-race-4f434ab2-65b2-42b6-9756-692fd5fec4d8 pods took: 300.393245ms STEP: Creating RC which spawns configmap-volume pods Jun 7 14:20:33.306: INFO: Pod name wrapped-volume-race-7edb4d07-57cc-47ec-9ea7-389e8126b1ee: Found 0 pods out of 5 Jun 7 14:20:38.339: INFO: Pod name wrapped-volume-race-7edb4d07-57cc-47ec-9ea7-389e8126b1ee: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7edb4d07-57cc-47ec-9ea7-389e8126b1ee in namespace emptydir-wrapper-3926, will wait for the garbage collector to delete the pods Jun 7 14:20:52.443: INFO: Deleting ReplicationController wrapped-volume-race-7edb4d07-57cc-47ec-9ea7-389e8126b1ee took: 7.606091ms Jun 7 14:20:52.743: INFO: Terminating ReplicationController wrapped-volume-race-7edb4d07-57cc-47ec-9ea7-389e8126b1ee pods took: 300.364577ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:21:33.814: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3926" for this suite. Jun 7 14:21:41.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:21:41.898: INFO: namespace emptydir-wrapper-3926 deletion completed in 8.079751446s • [SLOW TEST:196.496 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:21:41.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:21:41.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9756" for this suite. 
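The "Pods Set QOS Class" test below verifies the qosClass the API server assigns; the pod JSON earlier in this log shows `"qosClass": "BestEffort"` for a pod with empty `resources`. The classification rule can be sketched as (simplified; the real rule also requires both cpu and memory to be covered for Guaranteed):

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification for containers given as
    dicts like {"requests": {...}, "limits": {...}}."""
    requests = [c.get("requests") or {} for c in containers]
    limits = [c.get("limits") or {} for c in containers]
    if not any(requests) and not any(limits):
        return "BestEffort"   # no resources requested or limited anywhere
    if all(l and c.get("requests") == l for c, l in zip(containers, limits)):
        return "Guaranteed"   # every container has limits equal to requests
    return "Burstable"        # something in between
```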
Jun 7 14:22:04.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:22:04.116: INFO: namespace pods-9756 deletion completed in 22.108017104s • [SLOW TEST:22.218 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:22:04.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jun 7 14:22:04.192: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix178169948/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:22:04.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3852" for this 
suite. Jun 7 14:22:10.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:22:10.355: INFO: namespace kubectl-3852 deletion completed in 6.094332742s • [SLOW TEST:6.239 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:22:10.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:22:10.475: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 19.784563ms)
Jun 7 14:22:10.478: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.862278ms)
Jun 7 14:22:10.481: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.236629ms)
Jun 7 14:22:10.484: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.044437ms)
Jun 7 14:22:10.505: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 20.543208ms)
Jun 7 14:22:10.509: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.442216ms)
Jun 7 14:22:10.512: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.714906ms)
Jun 7 14:22:10.516: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.684097ms)
Jun 7 14:22:10.519: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.367593ms)
Jun 7 14:22:10.523: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.845962ms)
Jun 7 14:22:10.527: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.884177ms)
Jun 7 14:22:10.531: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.878244ms)
Jun 7 14:22:10.535: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.60684ms)
Jun 7 14:22:10.538: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.281998ms)
Jun 7 14:22:10.541: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.280786ms)
Jun 7 14:22:10.544: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.103775ms)
Jun 7 14:22:10.549: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.033357ms)
Jun 7 14:22:10.552: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.330272ms)
Jun 7 14:22:10.555: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.498545ms)
Jun 7 14:22:10.558: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.988843ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:22:10.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4745" for this suite. Jun 7 14:22:16.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:22:16.668: INFO: namespace proxy-4745 deletion completed in 6.103700171s • [SLOW TEST:6.312 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:22:16.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 7 14:22:16.721: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386" in namespace "downward-api-6092" to be "success or failure" Jun 7 14:22:16.730: INFO: Pod "downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386": Phase="Pending", Reason="", readiness=false. Elapsed: 9.0615ms Jun 7 14:22:18.816: INFO: Pod "downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095224147s Jun 7 14:22:20.821: INFO: Pod "downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100045757s STEP: Saw pod success Jun 7 14:22:20.821: INFO: Pod "downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386" satisfied condition "success or failure" Jun 7 14:22:20.823: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386 container client-container: STEP: delete the pod Jun 7 14:22:20.851: INFO: Waiting for pod downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386 to disappear Jun 7 14:22:20.880: INFO: Pod downwardapi-volume-eee4e31d-1964-47d1-a82d-9a0e29849386 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:22:20.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6092" for this suite. 
Jun 7 14:22:26.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:22:27.111: INFO: namespace downward-api-6092 deletion completed in 6.227382434s

• [SLOW TEST:10.442 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:22:27.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jun 7 14:22:27.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4617,SelfLink:/api/v1/namespaces/watch-4617/configmaps/e2e-watch-test-resource-version,UID:7d2e031c-b3c9-4cb9-8368-5f5486e3959b,ResourceVersion:15165404,Generation:0,CreationTimestamp:2020-06-07 14:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 7 14:22:27.240: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4617,SelfLink:/api/v1/namespaces/watch-4617/configmaps/e2e-watch-test-resource-version,UID:7d2e031c-b3c9-4cb9-8368-5f5486e3959b,ResourceVersion:15165405,Generation:0,CreationTimestamp:2020-06-07 14:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:22:27.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4617" for this suite.
Jun 7 14:22:33.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:22:33.385: INFO: namespace watch-4617 deletion completed in 6.140985418s

• [SLOW TEST:6.274 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:22:33.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jun 7 14:22:33.436: INFO: Waiting up to 5m0s for pod "client-containers-78864283-e604-403f-ac8f-92963049a6f8" in namespace "containers-1448" to be "success or failure"
Jun 7 14:22:33.447: INFO: Pod "client-containers-78864283-e604-403f-ac8f-92963049a6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.645922ms
Jun 7 14:22:35.452: INFO: Pod "client-containers-78864283-e604-403f-ac8f-92963049a6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015695085s
Jun 7 14:22:37.456: INFO: Pod "client-containers-78864283-e604-403f-ac8f-92963049a6f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019891362s
STEP: Saw pod success
Jun 7 14:22:37.456: INFO: Pod "client-containers-78864283-e604-403f-ac8f-92963049a6f8" satisfied condition "success or failure"
Jun 7 14:22:37.459: INFO: Trying to get logs from node iruya-worker pod client-containers-78864283-e604-403f-ac8f-92963049a6f8 container test-container:
STEP: delete the pod
Jun 7 14:22:37.542: INFO: Waiting for pod client-containers-78864283-e604-403f-ac8f-92963049a6f8 to disappear
Jun 7 14:22:37.586: INFO: Pod client-containers-78864283-e604-403f-ac8f-92963049a6f8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:22:37.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1448" for this suite.
Jun 7 14:22:43.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:22:43.690: INFO: namespace containers-1448 deletion completed in 6.100127386s

• [SLOW TEST:10.305 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:22:43.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:22:43.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d" in namespace "projected-1270" to be "success or failure"
Jun 7 14:22:43.824: INFO: Pod "downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726868ms
Jun 7 14:22:45.828: INFO: Pod "downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00624103s
Jun 7 14:22:47.832: INFO: Pod "downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010283997s
STEP: Saw pod success
Jun 7 14:22:47.832: INFO: Pod "downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d" satisfied condition "success or failure"
Jun 7 14:22:47.834: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d container client-container:
STEP: delete the pod
Jun 7 14:22:47.950: INFO: Waiting for pod downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d to disappear
Jun 7 14:22:47.988: INFO: Pod downwardapi-volume-205d258e-1358-45f6-b274-55908e56ac5d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:22:47.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1270" for this suite.
Jun 7 14:22:54.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:22:54.173: INFO: namespace projected-1270 deletion completed in 6.180990673s

• [SLOW TEST:10.483 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:22:54.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 7 14:22:58.764: INFO: Successfully updated pod "annotationupdate0429efbe-9341-46e5-8819-3963ce57231c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:23:02.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-547" for this suite.
Jun 7 14:23:24.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:23:24.900: INFO: namespace projected-547 deletion completed in 22.090891599s

• [SLOW TEST:30.727 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:23:24.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:23:24.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b" in namespace "downward-api-6788" to be "success or failure"
Jun 7 14:23:24.969: INFO: Pod "downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.583941ms
Jun 7 14:23:26.974: INFO: Pod "downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007500148s
Jun 7 14:23:28.978: INFO: Pod "downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011494355s
STEP: Saw pod success
Jun 7 14:23:28.978: INFO: Pod "downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b" satisfied condition "success or failure"
Jun 7 14:23:28.980: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b container client-container:
STEP: delete the pod
Jun 7 14:23:29.000: INFO: Waiting for pod downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b to disappear
Jun 7 14:23:29.005: INFO: Pod downwardapi-volume-cb231a70-21b7-4972-bbbc-7b96f1a1135b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:23:29.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6788" for this suite.
Jun 7 14:23:35.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:23:35.316: INFO: namespace downward-api-6788 deletion completed in 6.306343792s

• [SLOW TEST:10.415 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:23:35.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-d83ed8ed-7bcf-4c6b-8ce7-fdb03eb344ec
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:23:35.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4414" for this suite.
Jun 7 14:23:41.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:23:41.591: INFO: namespace configmap-4414 deletion completed in 6.180386829s

• [SLOW TEST:6.275 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:23:41.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 7 14:23:41.706: INFO: Waiting up to 5m0s for pod "pod-e0cd793c-a557-478c-9f37-8cfc17f00cea" in namespace "emptydir-3369" to be "success or failure"
Jun 7 14:23:41.726: INFO: Pod "pod-e0cd793c-a557-478c-9f37-8cfc17f00cea": Phase="Pending", Reason="", readiness=false. Elapsed: 19.549292ms
Jun 7 14:23:43.730: INFO: Pod "pod-e0cd793c-a557-478c-9f37-8cfc17f00cea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023551986s
Jun 7 14:23:45.734: INFO: Pod "pod-e0cd793c-a557-478c-9f37-8cfc17f00cea": Phase="Running", Reason="", readiness=true. Elapsed: 4.027885818s
Jun 7 14:23:47.739: INFO: Pod "pod-e0cd793c-a557-478c-9f37-8cfc17f00cea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032681489s
STEP: Saw pod success
Jun 7 14:23:47.739: INFO: Pod "pod-e0cd793c-a557-478c-9f37-8cfc17f00cea" satisfied condition "success or failure"
Jun 7 14:23:47.742: INFO: Trying to get logs from node iruya-worker2 pod pod-e0cd793c-a557-478c-9f37-8cfc17f00cea container test-container:
STEP: delete the pod
Jun 7 14:23:47.811: INFO: Waiting for pod pod-e0cd793c-a557-478c-9f37-8cfc17f00cea to disappear
Jun 7 14:23:47.816: INFO: Pod pod-e0cd793c-a557-478c-9f37-8cfc17f00cea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:23:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3369" for this suite.
Jun 7 14:23:53.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:23:53.910: INFO: namespace emptydir-3369 deletion completed in 6.0915945s

• [SLOW TEST:12.318 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:23:53.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 7 14:24:02.018: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 7 14:24:02.042: INFO: Pod pod-with-prestop-http-hook still exists
Jun 7 14:24:04.043: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 7 14:24:04.054: INFO: Pod pod-with-prestop-http-hook still exists
Jun 7 14:24:06.043: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 7 14:24:06.047: INFO: Pod pod-with-prestop-http-hook still exists
Jun 7 14:24:08.043: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 7 14:24:08.046: INFO: Pod pod-with-prestop-http-hook still exists
Jun 7 14:24:10.043: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 7 14:24:10.047: INFO: Pod pod-with-prestop-http-hook still exists
Jun 7 14:24:12.043: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 7 14:24:12.047: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:24:12.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9688" for this suite.
Jun 7 14:24:34.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:24:34.213: INFO: namespace container-lifecycle-hook-9688 deletion completed in 22.157524879s

• [SLOW TEST:40.303 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:24:34.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-53cb94af-3236-4607-9e2b-ea0ba1e9d5a2
STEP: Creating a pod to test consume configMaps
Jun 7 14:24:34.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803" in namespace "configmap-7202" to be "success or failure"
Jun 7 14:24:34.322: INFO: Pod "pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803": Phase="Pending", Reason="", readiness=false. Elapsed: 18.328904ms
Jun 7 14:24:36.326: INFO: Pod "pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022382097s
Jun 7 14:24:38.331: INFO: Pod "pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027078403s
STEP: Saw pod success
Jun 7 14:24:38.331: INFO: Pod "pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803" satisfied condition "success or failure"
Jun 7 14:24:38.334: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803 container configmap-volume-test:
STEP: delete the pod
Jun 7 14:24:38.385: INFO: Waiting for pod pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803 to disappear
Jun 7 14:24:38.410: INFO: Pod pod-configmaps-d7020cbe-50b8-4e3c-97a9-eb5a3edf5803 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:24:38.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7202" for this suite.
Jun 7 14:24:44.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:24:44.527: INFO: namespace configmap-7202 deletion completed in 6.112869905s

• [SLOW TEST:10.314 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:24:44.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jun 7 14:24:44.596: INFO: Waiting up to 5m0s for pod "var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732" in namespace "var-expansion-2551" to be "success or failure"
Jun 7 14:24:44.599: INFO: Pod "var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732": Phase="Pending", Reason="", readiness=false. Elapsed: 3.119627ms
Jun 7 14:24:46.603: INFO: Pod "var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006922901s
Jun 7 14:24:48.608: INFO: Pod "var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011940407s
STEP: Saw pod success
Jun 7 14:24:48.608: INFO: Pod "var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732" satisfied condition "success or failure"
Jun 7 14:24:48.612: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732 container dapi-container:
STEP: delete the pod
Jun 7 14:24:48.670: INFO: Waiting for pod var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732 to disappear
Jun 7 14:24:48.683: INFO: Pod var-expansion-9a0d63bd-74aa-4439-9840-72c30d708732 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:24:48.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2551" for this suite.
Jun 7 14:24:54.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:24:54.778: INFO: namespace var-expansion-2551 deletion completed in 6.092443177s

• [SLOW TEST:10.250 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:24:54.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 7 14:24:59.374: INFO: Successfully updated pod "pod-update-activedeadlineseconds-92f98b92-d538-4eeb-84fc-b8d6a4f5c025"
Jun 7 14:24:59.374: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-92f98b92-d538-4eeb-84fc-b8d6a4f5c025" in namespace "pods-5458" to be "terminated due to deadline exceeded"
Jun 7 14:24:59.384: INFO: Pod "pod-update-activedeadlineseconds-92f98b92-d538-4eeb-84fc-b8d6a4f5c025": Phase="Running", Reason="", readiness=true. Elapsed: 9.552591ms
Jun 7 14:25:01.388: INFO: Pod "pod-update-activedeadlineseconds-92f98b92-d538-4eeb-84fc-b8d6a4f5c025": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.013944954s
Jun 7 14:25:01.388: INFO: Pod "pod-update-activedeadlineseconds-92f98b92-d538-4eeb-84fc-b8d6a4f5c025" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:25:01.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5458" for this suite.
Jun 7 14:25:07.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:25:07.492: INFO: namespace pods-5458 deletion completed in 6.099656317s
• [SLOW TEST:12.714 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:25:07.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-bbf80220-3b93-4ea7-ac98-20519c002dee
STEP: Creating configMap with name cm-test-opt-upd-c9849068-3c5f-45fa-9a60-44725fa9b3e3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bbf80220-3b93-4ea7-ac98-20519c002dee
STEP: Updating configmap cm-test-opt-upd-c9849068-3c5f-45fa-9a60-44725fa9b3e3
STEP: Creating configMap with name cm-test-opt-create-82d36d7a-9ddf-4e2a-8c28-7d39b5a15c5f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:25:15.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9027" for this suite.
Jun 7 14:25:37.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:25:37.750: INFO: namespace projected-9027 deletion completed in 22.090684059s
• [SLOW TEST:30.258 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:25:37.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 7 14:25:41.871: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:25:41.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2806" for this suite.
Jun 7 14:25:47.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:25:48.012: INFO: namespace container-runtime-2806 deletion completed in 6.122094409s
• [SLOW TEST:10.262 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:25:48.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:25:48.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb" in namespace "downward-api-4792" to be "success or failure"
Jun 7 14:25:48.184: INFO: Pod "downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.076199ms
Jun 7 14:25:50.188: INFO: Pod "downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019466187s
Jun 7 14:25:52.192: INFO: Pod "downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023211993s
STEP: Saw pod success
Jun 7 14:25:52.192: INFO: Pod "downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb" satisfied condition "success or failure"
Jun 7 14:25:52.195: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb container client-container:
STEP: delete the pod
Jun 7 14:25:52.223: INFO: Waiting for pod downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb to disappear
Jun 7 14:25:52.256: INFO: Pod downwardapi-volume-d11403b1-23ce-42a4-bd7e-59e25ac44efb no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:25:52.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4792" for this suite.
Jun 7 14:25:58.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:25:58.368: INFO: namespace downward-api-4792 deletion completed in 6.108255109s
• [SLOW TEST:10.355 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:25:58.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2796.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2796.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2796.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2796.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2796.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2796.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 14:26:04.509: INFO: DNS probes using dns-2796/dns-test-5288de17-d2a5-4f33-91b6-eaeab65c17a3 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:26:04.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2796" for this suite.
Jun 7 14:26:10.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:26:10.670: INFO: namespace dns-2796 deletion completed in 6.122442105s
• [SLOW TEST:12.301 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:26:10.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:26:10.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74" in namespace "downward-api-2864" to be "success or failure"
Jun 7 14:26:10.769: INFO: Pod "downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74": Phase="Pending", Reason="", readiness=false. Elapsed: 15.139986ms
Jun 7 14:26:12.774: INFO: Pod "downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019616968s
Jun 7 14:26:14.778: INFO: Pod "downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024322235s
STEP: Saw pod success
Jun 7 14:26:14.778: INFO: Pod "downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74" satisfied condition "success or failure"
Jun 7 14:26:14.782: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74 container client-container:
STEP: delete the pod
Jun 7 14:26:14.822: INFO: Waiting for pod downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74 to disappear
Jun 7 14:26:14.861: INFO: Pod downwardapi-volume-05ba4786-0616-443f-b451-989c3afe0b74 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:26:14.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2864" for this suite.
Jun 7 14:26:20.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:26:20.967: INFO: namespace downward-api-2864 deletion completed in 6.101562446s
• [SLOW TEST:10.297 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:26:20.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-cadeef66-5be7-4c15-9713-463f2f626287
STEP: Creating a pod to test consume secrets
Jun 7 14:26:21.127: INFO: Waiting up to 5m0s for pod "pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b" in namespace "secrets-9431" to be "success or failure"
Jun 7 14:26:21.197: INFO: Pod "pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b": Phase="Pending", Reason="", readiness=false. Elapsed: 70.114879ms
Jun 7 14:26:23.202: INFO: Pod "pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074990312s
Jun 7 14:26:25.207: INFO: Pod "pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079583445s
STEP: Saw pod success
Jun 7 14:26:25.207: INFO: Pod "pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b" satisfied condition "success or failure"
Jun 7 14:26:25.210: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b container secret-volume-test:
STEP: delete the pod
Jun 7 14:26:25.234: INFO: Waiting for pod pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b to disappear
Jun 7 14:26:25.244: INFO: Pod pod-secrets-6b0740f6-3569-43ff-8e91-3f49ac07578b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:26:25.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9431" for this suite.
Jun 7 14:26:31.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:26:31.440: INFO: namespace secrets-9431 deletion completed in 6.192918015s
STEP: Destroying namespace "secret-namespace-2170" for this suite.
Jun 7 14:26:37.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:26:37.535: INFO: namespace secret-namespace-2170 deletion completed in 6.094460049s
• [SLOW TEST:16.568 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:26:37.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:26:37.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99" in namespace "projected-1179" to be "success or failure"
Jun 7 14:26:37.652: INFO: Pod "downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99": Phase="Pending", Reason="", readiness=false. Elapsed: 32.260528ms
Jun 7 14:26:39.656: INFO: Pod "downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036106049s
Jun 7 14:26:41.660: INFO: Pod "downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040238229s
STEP: Saw pod success
Jun 7 14:26:41.660: INFO: Pod "downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99" satisfied condition "success or failure"
Jun 7 14:26:41.663: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99 container client-container:
STEP: delete the pod
Jun 7 14:26:41.687: INFO: Waiting for pod downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99 to disappear
Jun 7 14:26:41.697: INFO: Pod downwardapi-volume-b5f60a7a-48f6-4c65-bfc5-14396bdbac99 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:26:41.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1179" for this suite.
Jun 7 14:26:47.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:26:47.898: INFO: namespace projected-1179 deletion completed in 6.173182925s
• [SLOW TEST:10.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:26:47.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:26:48.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031" in namespace "projected-1088" to be "success or failure"
Jun 7 14:26:48.119: INFO: Pod "downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031": Phase="Pending", Reason="", readiness=false. Elapsed: 41.664808ms
Jun 7 14:26:50.123: INFO: Pod "downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045652079s
Jun 7 14:26:52.128: INFO: Pod "downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050366666s
STEP: Saw pod success
Jun 7 14:26:52.128: INFO: Pod "downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031" satisfied condition "success or failure"
Jun 7 14:26:52.130: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031 container client-container:
STEP: delete the pod
Jun 7 14:26:52.161: INFO: Waiting for pod downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031 to disappear
Jun 7 14:26:52.164: INFO: Pod downwardapi-volume-b4fee948-4520-43c7-862e-30fd5ac23031 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:26:52.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1088" for this suite.
Jun 7 14:26:58.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:26:58.256: INFO: namespace projected-1088 deletion completed in 6.088407327s
• [SLOW TEST:10.357 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:26:58.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f541a270-bf05-40a0-a44b-99bb42c2445f
STEP: Creating a pod to test consume configMaps
Jun 7 14:26:58.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d" in namespace "projected-3372" to be "success or failure"
Jun 7 14:26:58.407: INFO: Pod "pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.160649ms
Jun 7 14:27:00.411: INFO: Pod "pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014057252s
Jun 7 14:27:02.415: INFO: Pod "pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018347045s
STEP: Saw pod success
Jun 7 14:27:02.416: INFO: Pod "pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d" satisfied condition "success or failure"
Jun 7 14:27:02.418: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 14:27:02.454: INFO: Waiting for pod pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d to disappear
Jun 7 14:27:02.480: INFO: Pod pod-projected-configmaps-06472409-eb6d-4eaf-818a-2a93c430d81d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:27:02.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3372" for this suite.
Jun 7 14:27:08.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:27:08.836: INFO: namespace projected-3372 deletion completed in 6.352460926s
• [SLOW TEST:10.580 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:27:08.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 7 14:27:08.913: INFO: Waiting up to 5m0s for pod "downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593" in namespace "downward-api-9543" to be "success or failure"
Jun 7 14:27:08.923: INFO: Pod "downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593": Phase="Pending", Reason="", readiness=false. Elapsed: 9.750995ms
Jun 7 14:27:10.927: INFO: Pod "downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014057514s
Jun 7 14:27:12.932: INFO: Pod "downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019066237s
STEP: Saw pod success
Jun 7 14:27:12.932: INFO: Pod "downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593" satisfied condition "success or failure"
Jun 7 14:27:12.936: INFO: Trying to get logs from node iruya-worker pod downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593 container dapi-container:
STEP: delete the pod
Jun 7 14:27:12.984: INFO: Waiting for pod downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593 to disappear
Jun 7 14:27:13.007: INFO: Pod downward-api-c977d022-7b89-4450-91e9-1f39bfb8d593 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:27:13.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9543" for this suite.
Jun 7 14:27:19.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:27:19.106: INFO: namespace downward-api-9543 deletion completed in 6.0958212s
• [SLOW TEST:10.270 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:27:19.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jun 7 14:27:19.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jun 7 14:27:19.344: INFO: stderr: ""
Jun 7 14:27:19.344: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:27:19.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8157" for this suite.
Jun 7 14:27:25.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:27:25.477: INFO: namespace kubectl-8157 deletion completed in 6.12824682s
• [SLOW TEST:6.371 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:27:25.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 7 14:27:30.094: INFO: Successfully updated pod "labelsupdate87c3987e-dbdd-4719-8620-fe1e66d991a6"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:27:34.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2462" for this suite.
Jun 7 14:27:56.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:27:56.238: INFO: namespace downward-api-2462 deletion completed in 22.112305787s
• [SLOW TEST:30.760 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:27:56.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 7 14:28:00.442: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:28:00.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4717" for this suite. Jun 7 14:28:06.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:28:06.582: INFO: namespace container-runtime-4717 deletion completed in 6.092273328s • [SLOW TEST:10.343 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:28:06.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap 
with name configmap-test-volume-4fc43de8-5181-4ec8-877c-cb7ec6bbd24f STEP: Creating a pod to test consume configMaps Jun 7 14:28:06.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b" in namespace "configmap-1688" to be "success or failure" Jun 7 14:28:06.649: INFO: Pod "pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320786ms Jun 7 14:28:08.653: INFO: Pod "pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008087749s Jun 7 14:28:10.656: INFO: Pod "pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011919829s STEP: Saw pod success Jun 7 14:28:10.656: INFO: Pod "pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b" satisfied condition "success or failure" Jun 7 14:28:10.658: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b container configmap-volume-test: STEP: delete the pod Jun 7 14:28:10.757: INFO: Waiting for pod pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b to disappear Jun 7 14:28:10.776: INFO: Pod pod-configmaps-7667c628-b49a-4e7f-b7fa-c908310fdd6b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:28:10.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1688" for this suite. 
Jun 7 14:28:16.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:28:16.893: INFO: namespace configmap-1688 deletion completed in 6.113435782s • [SLOW TEST:10.310 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:28:16.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 7 14:28:16.942: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 7 14:28:26.001: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:28:26.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3188" for this suite. Jun 7 14:28:32.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:28:32.146: INFO: namespace pods-3188 deletion completed in 6.130881286s • [SLOW TEST:15.252 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:28:32.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-1af6ca21-183f-469c-b4a7-9e1aac6ba2e2 STEP: Creating a pod to test consume secrets Jun 7 14:28:32.261: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a" in namespace "projected-8731" to be "success or failure" Jun 7 14:28:32.267: INFO: Pod "pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a": 
Phase="Pending", Reason="", readiness=false. Elapsed: 5.217672ms Jun 7 14:28:34.518: INFO: Pod "pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25658897s Jun 7 14:28:36.528: INFO: Pod "pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.266570309s STEP: Saw pod success Jun 7 14:28:36.528: INFO: Pod "pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a" satisfied condition "success or failure" Jun 7 14:28:36.530: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a container projected-secret-volume-test: STEP: delete the pod Jun 7 14:28:36.555: INFO: Waiting for pod pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a to disappear Jun 7 14:28:36.566: INFO: Pod pod-projected-secrets-15e9446c-27b3-48fb-b9df-09a642f3e48a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:28:36.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8731" for this suite. 
Jun 7 14:28:42.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:28:42.661: INFO: namespace projected-8731 deletion completed in 6.088492162s • [SLOW TEST:10.515 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:28:42.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jun 7 14:28:42.814: INFO: Waiting up to 5m0s for pod "client-containers-abfe22d1-411c-432a-8432-01104b903bef" in namespace "containers-7861" to be "success or failure" Jun 7 14:28:42.830: INFO: Pod "client-containers-abfe22d1-411c-432a-8432-01104b903bef": Phase="Pending", Reason="", readiness=false. Elapsed: 16.156236ms Jun 7 14:28:44.835: INFO: Pod "client-containers-abfe22d1-411c-432a-8432-01104b903bef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0212125s Jun 7 14:28:46.839: INFO: Pod "client-containers-abfe22d1-411c-432a-8432-01104b903bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025184175s STEP: Saw pod success Jun 7 14:28:46.839: INFO: Pod "client-containers-abfe22d1-411c-432a-8432-01104b903bef" satisfied condition "success or failure" Jun 7 14:28:46.841: INFO: Trying to get logs from node iruya-worker pod client-containers-abfe22d1-411c-432a-8432-01104b903bef container test-container: STEP: delete the pod Jun 7 14:28:46.873: INFO: Waiting for pod client-containers-abfe22d1-411c-432a-8432-01104b903bef to disappear Jun 7 14:28:46.886: INFO: Pod client-containers-abfe22d1-411c-432a-8432-01104b903bef no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:28:46.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7861" for this suite. Jun 7 14:28:52.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:28:52.993: INFO: namespace containers-7861 deletion completed in 6.103865723s • [SLOW TEST:10.332 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jun 7 14:28:52.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jun 7 14:28:53.037: INFO: Waiting up to 5m0s for pod "var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da" in namespace "var-expansion-3969" to be "success or failure" Jun 7 14:28:53.080: INFO: Pod "var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da": Phase="Pending", Reason="", readiness=false. Elapsed: 42.957648ms Jun 7 14:28:55.085: INFO: Pod "var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04774403s Jun 7 14:28:57.090: INFO: Pod "var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052504369s STEP: Saw pod success Jun 7 14:28:57.090: INFO: Pod "var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da" satisfied condition "success or failure" Jun 7 14:28:57.093: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da container dapi-container: STEP: delete the pod Jun 7 14:28:57.115: INFO: Waiting for pod var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da to disappear Jun 7 14:28:57.119: INFO: Pod var-expansion-799bda6c-e2ca-42d9-a764-0ea1d7ab68da no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:28:57.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3969" for this suite. 
Jun 7 14:29:03.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:29:03.244: INFO: namespace var-expansion-3969 deletion completed in 6.11745756s • [SLOW TEST:10.250 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:29:03.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 7 14:29:03.338: INFO: Waiting up to 5m0s for pod "pod-22316b65-c709-494c-9a0c-3963ac496f81" in namespace "emptydir-2356" to be "success or failure" Jun 7 14:29:03.364: INFO: Pod "pod-22316b65-c709-494c-9a0c-3963ac496f81": Phase="Pending", Reason="", readiness=false. Elapsed: 26.390461ms Jun 7 14:29:05.369: INFO: Pod "pod-22316b65-c709-494c-9a0c-3963ac496f81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031522527s Jun 7 14:29:07.374: INFO: Pod "pod-22316b65-c709-494c-9a0c-3963ac496f81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035651875s STEP: Saw pod success Jun 7 14:29:07.374: INFO: Pod "pod-22316b65-c709-494c-9a0c-3963ac496f81" satisfied condition "success or failure" Jun 7 14:29:07.376: INFO: Trying to get logs from node iruya-worker pod pod-22316b65-c709-494c-9a0c-3963ac496f81 container test-container: STEP: delete the pod Jun 7 14:29:07.421: INFO: Waiting for pod pod-22316b65-c709-494c-9a0c-3963ac496f81 to disappear Jun 7 14:29:07.423: INFO: Pod pod-22316b65-c709-494c-9a0c-3963ac496f81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:29:07.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2356" for this suite. Jun 7 14:29:13.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:29:13.517: INFO: namespace emptydir-2356 deletion completed in 6.091842999s • [SLOW TEST:10.274 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:29:13.518: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:29:13.600: INFO: Creating ReplicaSet my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2 Jun 7 14:29:13.614: INFO: Pod name my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2: Found 0 pods out of 1 Jun 7 14:29:18.619: INFO: Pod name my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2: Found 1 pods out of 1 Jun 7 14:29:18.619: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2" is running Jun 7 14:29:18.622: INFO: Pod "my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2-n5558" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:29:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:29:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:29:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:29:13 +0000 UTC Reason: Message:}]) Jun 7 14:29:18.622: INFO: Trying to dial the pod Jun 7 14:29:24.260: INFO: Controller my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2: Got expected result from replica 1 [my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2-n5558]: "my-hostname-basic-94ab5fc5-652c-4ea2-9d42-aada473e3dc2-n5558", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:29:24.260: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-962" for this suite. Jun 7 14:29:30.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:29:30.416: INFO: namespace replicaset-962 deletion completed in 6.108091456s • [SLOW TEST:16.898 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:29:30.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:29:30.488: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 6.196528ms) Jun 7 14:29:30.512: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 23.651541ms) Jun 7 14:29:30.516: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.82084ms) Jun 7 14:29:30.523: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 7.258875ms) Jun 7 14:29:30.527: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.312107ms) Jun 7 14:29:30.529: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.580162ms) Jun 7 14:29:30.532: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.225549ms) Jun 7 14:29:30.534: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.268458ms) Jun 7 14:29:30.536: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.086994ms) Jun 7 14:29:30.538: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.411123ms) Jun 7 14:29:30.541: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.964327ms) Jun 7 14:29:30.544: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.506932ms) Jun 7 14:29:30.547: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.909386ms) Jun 7 14:29:30.550: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.674109ms) Jun 7 14:29:30.552: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.847001ms) Jun 7 14:29:30.555: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.006308ms) Jun 7 14:29:30.578: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 22.513882ms) Jun 7 14:29:30.586: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 7.504057ms) Jun 7 14:29:30.589: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.628313ms) Jun 7 14:29:30.592: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.66757ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:29:30.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2918" for this suite. Jun 7 14:29:36.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:29:36.689: INFO: namespace proxy-2918 deletion completed in 6.093541908s • [SLOW TEST:6.272 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:29:36.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:29:36.739: INFO: Creating deployment "nginx-deployment" Jun 7 14:29:36.749: INFO: Waiting for observed generation 1 Jun 7 14:29:39.051: INFO: Waiting for all required pods to come up
Jun 7 14:29:39.055: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 7 14:29:49.064: INFO: Waiting for deployment "nginx-deployment" to complete Jun 7 14:29:49.073: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 7 14:29:49.078: INFO: Updating deployment nginx-deployment Jun 7 14:29:49.078: INFO: Waiting for observed generation 2 Jun 7 14:29:51.087: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 7 14:29:51.090: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 7 14:29:51.093: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 7 14:29:51.100: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 7 14:29:51.100: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 7 14:29:51.103: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 7 14:29:51.108: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 7 14:29:51.108: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 7 14:29:51.114: INFO: Updating deployment nginx-deployment Jun 7 14:29:51.114: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 7 14:29:51.205: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 7 14:29:51.278: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 7 14:29:51.555: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5661,SelfLink:/apis/apps/v1/namespaces/deployment-5661/deployments/nginx-deployment,UID:0629aa9f-aa9e-454a-a4b9-7056485990d8,ResourceVersion:15167242,Generation:3,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-07 14:29:49 +0000 UTC 2020-06-07 14:29:36 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-06-07 14:29:51 +0000 UTC 2020-06-07 14:29:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 7 14:29:51.620: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5661,SelfLink:/apis/apps/v1/namespaces/deployment-5661/replicasets/nginx-deployment-55fb7cb77f,UID:9fa3f036-9a33-4727-ac88-95b8e2455a2a,ResourceVersion:15167283,Generation:3,CreationTimestamp:2020-06-07 14:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0629aa9f-aa9e-454a-a4b9-7056485990d8 0xc002feae67 0xc002feae68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 14:29:51.620: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 7 14:29:51.620: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5661,SelfLink:/apis/apps/v1/namespaces/deployment-5661/replicasets/nginx-deployment-7b8c6f4498,UID:b8ae3a8e-5c92-46cc-84d0-1dc5824143ce,ResourceVersion:15167277,Generation:3,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0629aa9f-aa9e-454a-a4b9-7056485990d8 0xc002feaf37 0xc002feaf38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 7 14:29:51.738: INFO: Pod "nginx-deployment-55fb7cb77f-2jlq5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2jlq5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-2jlq5,UID:d49460c6-4659-42d0-896f-a350ad35f1a5,ResourceVersion:15167248,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc002feb8c7 0xc002feb8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002feb940} {node.kubernetes.io/unreachable Exists NoExecute 0xc002feb960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.738: INFO: Pod "nginx-deployment-55fb7cb77f-5r6k6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5r6k6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-5r6k6,UID:b08a2c4d-879e-4a7c-94c1-8bd0319c5022,ResourceVersion:15167264,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc002feb9e7 0xc002feb9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002feba60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002feba80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.738: INFO: Pod "nginx-deployment-55fb7cb77f-68b66" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-68b66,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-68b66,UID:f54010dd-6004-4c20-80dd-d89e72383b13,ResourceVersion:15167196,Generation:0,CreationTimestamp:2020-06-07 14:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc002febb07 0xc002febb08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002febb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002febba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-07 14:29:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.738: INFO: Pod "nginx-deployment-55fb7cb77f-bdbnr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bdbnr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-bdbnr,UID:ebf98bbf-432d-4ae8-b9e2-acaa3af10df4,ResourceVersion:15167211,Generation:0,CreationTimestamp:2020-06-07 14:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc002febc77 0xc002febc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002febcf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002febd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-07 14:29:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.738: INFO: Pod "nginx-deployment-55fb7cb77f-gkxnl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gkxnl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-gkxnl,UID:840f6fa2-cd6b-4f08-9b09-81fee30ab2b4,ResourceVersion:15167265,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc002febde7 0xc002febde8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002febe60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002febe80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.739: INFO: Pod "nginx-deployment-55fb7cb77f-hbjrg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hbjrg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-hbjrg,UID:d055bf80-df46-4798-85f8-ca7b108585cb,ResourceVersion:15167251,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc002febf07 0xc002febf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002febf80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002febfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.739: INFO: Pod "nginx-deployment-55fb7cb77f-jgk69" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jgk69,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-jgk69,UID:fc094886-ab4f-437f-ac09-ef81c47e1d0f,ResourceVersion:15167266,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c027 0xc00385c028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385c0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.739: INFO: Pod "nginx-deployment-55fb7cb77f-kn29p" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kn29p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-kn29p,UID:29d0f1c6-d470-4532-bc94-3fc1efacc770,ResourceVersion:15167278,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c147 0xc00385c148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00385c1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.739: INFO: Pod "nginx-deployment-55fb7cb77f-mxx6v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mxx6v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-mxx6v,UID:4a12c4b3-67cf-45a5-9770-20dcaa7a4988,ResourceVersion:15167188,Generation:0,CreationTimestamp:2020-06-07 14:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c267 0xc00385c268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385c2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-07 14:29:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.739: INFO: Pod "nginx-deployment-55fb7cb77f-pc6vq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pc6vq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-pc6vq,UID:51fddd1a-f7b5-48ef-9ab5-1fb2bec4e710,ResourceVersion:15167185,Generation:0,CreationTimestamp:2020-06-07 14:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c3d7 0xc00385c3d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00385c450} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-07 14:29:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.739: INFO: Pod "nginx-deployment-55fb7cb77f-vrh6d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vrh6d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-vrh6d,UID:25a0fa03-5b37-4c34-ac40-e83f3bd15d7b,ResourceVersion:15167273,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c547 0xc00385c548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385c5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-55fb7cb77f-whjnl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-whjnl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-whjnl,UID:0273396e-b69f-4286-8ed8-c8b32f7d777b,ResourceVersion:15167288,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c667 0xc00385c668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00385c6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-07 14:29:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-55fb7cb77f-z6r4m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z6r4m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-55fb7cb77f-z6r4m,UID:b80ece4d-5544-4830-80d2-28655415bd26,ResourceVersion:15167213,Generation:0,CreationTimestamp:2020-06-07 14:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9fa3f036-9a33-4727-ac88-95b8e2455a2a 0xc00385c7e7 0xc00385c7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385c860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-07 14:29:49 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-7b8c6f4498-4gmf9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4gmf9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-4gmf9,UID:d9f196c2-bb74-4c20-bc4c-3ad66ecefa54,ResourceVersion:15167267,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385c957 0xc00385c958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385c9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385c9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-7b8c6f4498-6pbwz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6pbwz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-6pbwz,UID:e1f51469-d845-4ff0-847b-d1eb7945cde4,ResourceVersion:15167259,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385ca77 0xc00385ca78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385caf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385cb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-7b8c6f4498-75tbz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-75tbz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-75tbz,UID:de735297-6a92-4925-bab3-b1561dfe1a48,ResourceVersion:15167133,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385cb97 0xc00385cb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385cc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385cc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.253,StartTime:2020-06-07 14:29:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c2f7666a2d5f90c803fd52808cda2bd502df919692a27b745776c24a08d36b66}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-7b8c6f4498-9g6pj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9g6pj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-9g6pj,UID:6af56e24-8a5b-4725-94ee-e3c894d41aa5,ResourceVersion:15167258,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385cd07 0xc00385cd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385cd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385cda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-7b8c6f4498-cvgcp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cvgcp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-cvgcp,UID:c279fe1d-7e40-4a2e-b5c4-669c1409f10c,ResourceVersion:15167254,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385ce27 0xc00385ce28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385cea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385cec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.740: INFO: Pod "nginx-deployment-7b8c6f4498-d5c62" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d5c62,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-d5c62,UID:50c597b9-6f39-4cad-a278-34157ca4c188,ResourceVersion:15167260,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385cf47 0xc00385cf48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385cfc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385cfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.741: INFO: Pod "nginx-deployment-7b8c6f4498-fsjrn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fsjrn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-fsjrn,UID:f8fa6995-bd8e-4ed2-8620-b6102f8c683e,ResourceVersion:15167268,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d067 0xc00385d068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.741: INFO: Pod "nginx-deployment-7b8c6f4498-hw27j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hw27j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-hw27j,UID:7c83c75a-b24f-4e18-ba0a-ba894f37ddbd,ResourceVersion:15167270,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d187 0xc00385d188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d200} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.741: INFO: Pod "nginx-deployment-7b8c6f4498-mv7mr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mv7mr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-mv7mr,UID:fbcd3369-bb0c-4072-9286-834cc5a317d4,ResourceVersion:15167275,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d2a7 0xc00385d2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d320} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-07 14:29:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.741: INFO: Pod "nginx-deployment-7b8c6f4498-nhhf8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nhhf8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-nhhf8,UID:b605a437-70ba-46a3-acf7-17cdc0b7c0a4,ResourceVersion:15167284,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d407 0xc00385d408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d480} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-07 14:29:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.741: INFO: Pod "nginx-deployment-7b8c6f4498-nscck" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nscck,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-nscck,UID:3b34b50c-67fb-4837-a8d5-2355ce8e38e3,ResourceVersion:15167127,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d567 0xc00385d568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.250,StartTime:2020-06-07 14:29:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e0360a84d7d8866b881e9e9a3b830d1a065aa0c8ca4029dced5bd013ab4d07f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.741: INFO: Pod "nginx-deployment-7b8c6f4498-nt676" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nt676,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-nt676,UID:fffd220a-b7ef-4556-b98d-f71c0f9d33e0,ResourceVersion:15167263,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d6d7 0xc00385d6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d750} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-ntmlv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ntmlv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-ntmlv,UID:11f1d4aa-f026-4049-88ff-edf430cbab33,ResourceVersion:15167112,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d7f7 0xc00385d7f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d870} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385d890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.248,StartTime:2020-06-07 14:29:36 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-06-07 14:29:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://52513aa50884f1115e2381668076afaaa05fbe2a5f77d02e83be0363062644d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-rrlq7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rrlq7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-rrlq7,UID:f52737b2-ca59-4177-9359-08c8589d4218,ResourceVersion:15167105,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385d967 0xc00385d968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385d9e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385da00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.247,StartTime:2020-06-07 14:29:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b6e1220fa39c8205863e768b91ba5949ec446f9e217a5a78c9c5fa35c18b16a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-tlr5s" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tlr5s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-tlr5s,UID:34c4cfb6-43f6-425f-8e60-4889118a98ed,ResourceVersion:15167117,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385dad7 0xc00385dad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385db50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385db70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.249,StartTime:2020-06-07 14:29:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://04a0ef98764740f8563db1039b4fd807168f81ece6726a968bd7a34cc42f178e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-trbdr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-trbdr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-trbdr,UID:114c30d1-7009-4f5b-a2ec-4fcb9295cffd,ResourceVersion:15167269,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385dc47 0xc00385dc48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385dcc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385dce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-vb4tj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vb4tj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-vb4tj,UID:48c68479-ede4-41ea-8b2a-b716a75cd103,ResourceVersion:15167239,Generation:0,CreationTimestamp:2020-06-07 14:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385dd67 0xc00385dd68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385dde0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385de00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-w6mq5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w6mq5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-w6mq5,UID:36029e8b-ea27-4b92-884b-dea4ad195a28,ResourceVersion:15167145,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385de87 0xc00385de88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00385df00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00385df20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.5,StartTime:2020-06-07 14:29:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://06063db65d0842ca9571aa91ce53d2e84e7611d4bbfc317aa8a0a508150b2bd2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.742: INFO: Pod "nginx-deployment-7b8c6f4498-wc5j4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wc5j4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-wc5j4,UID:a3ae2f26-25bc-4d6f-ac7b-7f86734cd8e5,ResourceVersion:15167155,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc00385dff7 0xc00385dff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0037b6070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0037b6090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.4,StartTime:2020-06-07 14:29:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ea174b15a8419d3ad7b08631d7592b39e03c910861f1d5d238ecd8ae375d2450}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 7 14:29:51.743: INFO: Pod "nginx-deployment-7b8c6f4498-zfngk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zfngk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5661,SelfLink:/api/v1/namespaces/deployment-5661/pods/nginx-deployment-7b8c6f4498-zfngk,UID:ad01e5ef-6ef3-47af-977b-6a3f55108b92,ResourceVersion:15167141,Generation:0,CreationTimestamp:2020-06-07 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b8ae3a8e-5c92-46cc-84d0-1dc5824143ce 0xc0037b6167 0xc0037b6168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wwmzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwmzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wwmzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0037b61e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0037b6200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:29:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.254,StartTime:2020-06-07 14:29:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 14:29:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7ef1e5e2e7c492fac409ef3846f4263df4c5f7c2699cc46cd389b3eabe4a111b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:29:51.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-5661" for this suite. Jun 7 14:30:15.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:30:16.048: INFO: namespace deployment-5661 deletion completed in 24.20837614s • [SLOW TEST:39.359 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:30:16.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-872a15b2-bcf9-4024-8f70-376fccf4bd3c STEP: Creating a pod to test consume secrets Jun 7 14:30:16.477: INFO: Waiting up to 5m0s for pod "pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77" in namespace "secrets-3589" to be "success or failure" Jun 7 14:30:16.498: INFO: Pod "pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77": Phase="Pending", Reason="", readiness=false. Elapsed: 21.204399ms Jun 7 14:30:18.502: INFO: Pod "pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025380493s Jun 7 14:30:20.554: INFO: Pod "pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077451497s Jun 7 14:30:22.558: INFO: Pod "pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081318586s STEP: Saw pod success Jun 7 14:30:22.558: INFO: Pod "pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77" satisfied condition "success or failure" Jun 7 14:30:22.561: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77 container secret-volume-test: STEP: delete the pod Jun 7 14:30:22.576: INFO: Waiting for pod pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77 to disappear Jun 7 14:30:22.620: INFO: Pod pod-secrets-8312a398-bd8f-4a8a-afcd-4b5fb0f52c77 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:30:22.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3589" for this suite. 
Jun 7 14:30:28.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:30:28.774: INFO: namespace secrets-3589 deletion completed in 6.151221866s • [SLOW TEST:12.725 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:30:28.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0607 14:30:38.852439 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 7 14:30:38.852: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:30:38.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-397" for this suite. 
Jun 7 14:30:44.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:30:44.981: INFO: namespace gc-397 deletion completed in 6.125395159s • [SLOW TEST:16.206 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:30:44.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 7 14:30:53.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 14:30:53.156: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 14:30:55.156: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 14:30:55.190: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 14:30:57.156: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 14:30:57.161: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 14:30:59.156: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 14:30:59.172: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 14:31:01.156: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 14:31:01.208: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 14:31:03.156: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 14:31:03.160: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:31:03.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9155" for this suite. 
Jun 7 14:31:25.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:31:25.268: INFO: namespace container-lifecycle-hook-9155 deletion completed in 22.103155078s • [SLOW TEST:40.287 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:31:25.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-f7e53378-06a9-4cb5-a366-74dfc7c8ee2d STEP: Creating a pod to test consume secrets Jun 7 14:31:25.369: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226" in namespace "projected-7268" to be "success or failure" Jun 7 14:31:25.395: INFO: Pod 
"pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226": Phase="Pending", Reason="", readiness=false. Elapsed: 25.866932ms Jun 7 14:31:27.398: INFO: Pod "pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029099943s Jun 7 14:31:29.403: INFO: Pod "pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033215254s STEP: Saw pod success Jun 7 14:31:29.403: INFO: Pod "pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226" satisfied condition "success or failure" Jun 7 14:31:29.405: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226 container projected-secret-volume-test: STEP: delete the pod Jun 7 14:31:29.459: INFO: Waiting for pod pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226 to disappear Jun 7 14:31:29.476: INFO: Pod pod-projected-secrets-49b1eb68-c9ab-4cf8-bb26-527c65421226 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:31:29.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7268" for this suite. 
Jun 7 14:31:35.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:31:35.624: INFO: namespace projected-7268 deletion completed in 6.144660232s • [SLOW TEST:10.355 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:31:35.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:32:03.697: INFO: Container started at 2020-06-07 14:31:38 +0000 UTC, pod became ready at 2020-06-07 14:32:01 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:32:03.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-1122" for this suite. Jun 7 14:32:25.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:32:25.799: INFO: namespace container-probe-1122 deletion completed in 22.096001033s • [SLOW TEST:50.174 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:32:25.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 7 14:32:29.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-bcdaf1b1-54d9-40ef-ad6d-4442ececcc4d -c busybox-main-container --namespace=emptydir-142 -- cat /usr/share/volumeshare/shareddata.txt' Jun 7 14:32:32.778: INFO: stderr: "I0607 14:32:32.675293 3392 log.go:172] (0xc000c2a370) (0xc0006dea00) Create stream\nI0607 14:32:32.675355 3392 log.go:172] (0xc000c2a370) (0xc0006dea00) Stream added, 
broadcasting: 1\nI0607 14:32:32.677577 3392 log.go:172] (0xc000c2a370) Reply frame received for 1\nI0607 14:32:32.677630 3392 log.go:172] (0xc000c2a370) (0xc0003c6000) Create stream\nI0607 14:32:32.677640 3392 log.go:172] (0xc000c2a370) (0xc0003c6000) Stream added, broadcasting: 3\nI0607 14:32:32.678581 3392 log.go:172] (0xc000c2a370) Reply frame received for 3\nI0607 14:32:32.678630 3392 log.go:172] (0xc000c2a370) (0xc0003ca000) Create stream\nI0607 14:32:32.678645 3392 log.go:172] (0xc000c2a370) (0xc0003ca000) Stream added, broadcasting: 5\nI0607 14:32:32.679820 3392 log.go:172] (0xc000c2a370) Reply frame received for 5\nI0607 14:32:32.767734 3392 log.go:172] (0xc000c2a370) Data frame received for 5\nI0607 14:32:32.767773 3392 log.go:172] (0xc0003ca000) (5) Data frame handling\nI0607 14:32:32.767795 3392 log.go:172] (0xc000c2a370) Data frame received for 3\nI0607 14:32:32.767803 3392 log.go:172] (0xc0003c6000) (3) Data frame handling\nI0607 14:32:32.767813 3392 log.go:172] (0xc0003c6000) (3) Data frame sent\nI0607 14:32:32.767820 3392 log.go:172] (0xc000c2a370) Data frame received for 3\nI0607 14:32:32.767827 3392 log.go:172] (0xc0003c6000) (3) Data frame handling\nI0607 14:32:32.769083 3392 log.go:172] (0xc000c2a370) Data frame received for 1\nI0607 14:32:32.769100 3392 log.go:172] (0xc0006dea00) (1) Data frame handling\nI0607 14:32:32.769274 3392 log.go:172] (0xc0006dea00) (1) Data frame sent\nI0607 14:32:32.769296 3392 log.go:172] (0xc000c2a370) (0xc0006dea00) Stream removed, broadcasting: 1\nI0607 14:32:32.769357 3392 log.go:172] (0xc000c2a370) Go away received\nI0607 14:32:32.769617 3392 log.go:172] (0xc000c2a370) (0xc0006dea00) Stream removed, broadcasting: 1\nI0607 14:32:32.769635 3392 log.go:172] (0xc000c2a370) (0xc0003c6000) Stream removed, broadcasting: 3\nI0607 14:32:32.769644 3392 log.go:172] (0xc000c2a370) (0xc0003ca000) Stream removed, broadcasting: 5\n" Jun 7 14:32:32.778: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] 
[sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:32:32.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-142" for this suite. Jun 7 14:32:38.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:32:38.876: INFO: namespace emptydir-142 deletion completed in 6.094504644s • [SLOW TEST:13.077 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:32:38.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 7 14:32:38.955: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 7 14:32:39.003: INFO: Waiting for terminating namespaces to be deleted... 
Jun 7 14:32:39.006: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 7 14:32:39.011: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 7 14:32:39.011: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 14:32:39.011: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 7 14:32:39.011: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 14:32:39.011: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 7 14:32:39.019: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 7 14:32:39.019: INFO: Container coredns ready: true, restart count 0 Jun 7 14:32:39.019: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 7 14:32:39.019: INFO: Container coredns ready: true, restart count 0 Jun 7 14:32:39.019: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 7 14:32:39.019: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 14:32:39.019: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 7 14:32:39.019: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161649863207b5cb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:32:40.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1550" for this suite. Jun 7 14:32:46.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:32:46.223: INFO: namespace sched-pred-1550 deletion completed in 6.180311405s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.347 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:32:46.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 7 14:32:50.881: INFO: Successfully 
updated pod "annotationupdatea5ffe7e7-08cc-49f2-a9e0-882efbdc09ce" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:32:52.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6198" for this suite. Jun 7 14:33:14.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:33:15.005: INFO: namespace downward-api-6198 deletion completed in 22.090475197s • [SLOW TEST:28.781 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:33:15.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 7 14:33:19.120: INFO: Waiting up to 5m0s for pod "client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65" in namespace "pods-7351" to be "success or failure" Jun 7 
14:33:19.134: INFO: Pod "client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65": Phase="Pending", Reason="", readiness=false. Elapsed: 13.749444ms Jun 7 14:33:21.139: INFO: Pod "client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018586616s Jun 7 14:33:23.143: INFO: Pod "client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022888703s STEP: Saw pod success Jun 7 14:33:23.143: INFO: Pod "client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65" satisfied condition "success or failure" Jun 7 14:33:23.146: INFO: Trying to get logs from node iruya-worker pod client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65 container env3cont: STEP: delete the pod Jun 7 14:33:23.296: INFO: Waiting for pod client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65 to disappear Jun 7 14:33:23.305: INFO: Pod client-envvars-d503338b-e5ee-47ba-8df0-16930c03ab65 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:33:23.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7351" for this suite. 
Jun 7 14:34:13.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:34:13.422: INFO: namespace pods-7351 deletion completed in 50.114185502s • [SLOW TEST:58.417 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:34:13.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-607a5980-98b0-4e53-bbc2-010b37c307f2 STEP: Creating a pod to test consume configMaps Jun 7 14:34:13.552: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad" in namespace "projected-3429" to be "success or failure" Jun 7 14:34:13.581: INFO: Pod "pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.98251ms Jun 7 14:34:15.586: INFO: Pod "pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033439766s Jun 7 14:34:17.589: INFO: Pod "pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036467661s STEP: Saw pod success Jun 7 14:34:17.589: INFO: Pod "pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad" satisfied condition "success or failure" Jun 7 14:34:17.611: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad container projected-configmap-volume-test: STEP: delete the pod Jun 7 14:34:17.642: INFO: Waiting for pod pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad to disappear Jun 7 14:34:17.646: INFO: Pod pod-projected-configmaps-5ec592d3-09b3-44d9-88a0-34e15a7cf4ad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:34:17.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3429" for this suite. 
Jun 7 14:34:23.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:34:23.751: INFO: namespace projected-3429 deletion completed in 6.101015476s
• [SLOW TEST:10.328 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:34:23.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 7 14:34:23.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:23.864: INFO: Number of nodes with available pods: 0
Jun 7 14:34:23.864: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:34:24.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:24.871: INFO: Number of nodes with available pods: 0
Jun 7 14:34:24.871: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:34:25.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:25.872: INFO: Number of nodes with available pods: 0
Jun 7 14:34:25.872: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:34:26.893: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:26.995: INFO: Number of nodes with available pods: 0
Jun 7 14:34:26.995: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:34:27.869: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:27.872: INFO: Number of nodes with available pods: 0
Jun 7 14:34:27.872: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:34:28.869: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:28.874: INFO: Number of nodes with available pods: 2
Jun 7 14:34:28.874: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 7 14:34:28.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:28.947: INFO: Number of nodes with available pods: 1
Jun 7 14:34:28.947: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:29.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:29.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:29.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:30.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:30.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:30.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:31.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:31.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:31.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:32.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:32.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:32.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:33.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:33.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:33.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:34.951: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:34.954: INFO: Number of nodes with available pods: 1
Jun 7 14:34:34.954: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:35.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:35.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:35.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:36.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:36.955: INFO: Number of nodes with available pods: 1
Jun 7 14:34:36.955: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:37.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:37.956: INFO: Number of nodes with available pods: 1
Jun 7 14:34:37.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:38.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:38.975: INFO: Number of nodes with available pods: 1
Jun 7 14:34:38.975: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:39.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:39.955: INFO: Number of nodes with available pods: 1
Jun 7 14:34:39.955: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:40.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:40.957: INFO: Number of nodes with available pods: 1
Jun 7 14:34:40.957: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:41.966: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:41.968: INFO: Number of nodes with available pods: 1
Jun 7 14:34:41.968: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:42.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:42.955: INFO: Number of nodes with available pods: 1
Jun 7 14:34:42.955: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:43.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:43.955: INFO: Number of nodes with available pods: 1
Jun 7 14:34:43.956: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:44.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:44.957: INFO: Number of nodes with available pods: 1
Jun 7 14:34:44.957: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 14:34:45.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 14:34:45.955: INFO: Number of nodes with available pods: 2
Jun 7 14:34:45.955: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9513, will wait for the garbage collector to delete the pods
Jun 7 14:34:46.017: INFO: Deleting DaemonSet.extensions daemon-set took: 7.33246ms
Jun 7 14:34:46.317: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.28954ms
Jun 7 14:34:51.920: INFO: Number of nodes with available pods: 0
Jun 7 14:34:51.920: INFO: Number of running nodes: 0, number of available pods: 0
Jun 7 14:34:51.923: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9513/daemonsets","resourceVersion":"15168543"},"items":null}
Jun 7 14:34:51.925: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9513/pods","resourceVersion":"15168543"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:34:51.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9513" for this suite.
Jun 7 14:34:57.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:34:58.052: INFO: namespace daemonsets-9513 deletion completed in 6.113671159s
• [SLOW TEST:34.301 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:34:58.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:34:58.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922" in namespace "projected-898" to be "success or failure"
Jun 7 14:34:58.128: INFO: Pod "downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922": Phase="Pending", Reason="", readiness=false. Elapsed: 17.885515ms
Jun 7 14:35:00.133: INFO: Pod "downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022425074s
Jun 7 14:35:02.138: INFO: Pod "downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027797706s
STEP: Saw pod success
Jun 7 14:35:02.138: INFO: Pod "downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922" satisfied condition "success or failure"
Jun 7 14:35:02.141: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922 container client-container: 
STEP: delete the pod
Jun 7 14:35:02.177: INFO: Waiting for pod downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922 to disappear
Jun 7 14:35:02.186: INFO: Pod downwardapi-volume-75d7f4a0-ce42-4b71-bb2a-ed61fe4b6922 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:35:02.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-898" for this suite.
Jun 7 14:35:08.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:35:08.280: INFO: namespace projected-898 deletion completed in 6.091122838s
• [SLOW TEST:10.227 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:35:08.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-365f2f65-fd2c-4d57-bf83-1923225ada5a
STEP: Creating secret with name secret-projected-all-test-volume-94cae0b0-e92c-4553-831a-d79cbcc7dcaa
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 7 14:35:08.387: INFO: Waiting up to 5m0s for pod "projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936" in namespace "projected-1283" to be "success or failure"
Jun 7 14:35:08.391: INFO: Pod "projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936": Phase="Pending", Reason="", readiness=false. Elapsed: 3.198013ms
Jun 7 14:35:10.395: INFO: Pod "projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007811233s
Jun 7 14:35:12.400: INFO: Pod "projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012685782s
STEP: Saw pod success
Jun 7 14:35:12.400: INFO: Pod "projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936" satisfied condition "success or failure"
Jun 7 14:35:12.403: INFO: Trying to get logs from node iruya-worker pod projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936 container projected-all-volume-test: 
STEP: delete the pod
Jun 7 14:35:12.482: INFO: Waiting for pod projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936 to disappear
Jun 7 14:35:12.498: INFO: Pod projected-volume-7d1abe8f-a087-44d1-89b7-0a256c09a936 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:35:12.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1283" for this suite.
Jun 7 14:35:18.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:35:18.634: INFO: namespace projected-1283 deletion completed in 6.131263392s
• [SLOW TEST:10.353 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:35:18.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jun 7 14:35:18.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9689'
Jun 7 14:35:18.998: INFO: stderr: ""
Jun 7 14:35:18.998: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 14:35:18.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9689'
Jun 7 14:35:19.112: INFO: stderr: ""
Jun 7 14:35:19.112: INFO: stdout: "update-demo-nautilus-k7wsg update-demo-nautilus-tqlw7 "
Jun 7 14:35:19.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7wsg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9689'
Jun 7 14:35:19.220: INFO: stderr: ""
Jun 7 14:35:19.220: INFO: stdout: ""
Jun 7 14:35:19.220: INFO: update-demo-nautilus-k7wsg is created but not running
Jun 7 14:35:24.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9689'
Jun 7 14:35:24.328: INFO: stderr: ""
Jun 7 14:35:24.328: INFO: stdout: "update-demo-nautilus-k7wsg update-demo-nautilus-tqlw7 "
Jun 7 14:35:24.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7wsg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9689'
Jun 7 14:35:24.416: INFO: stderr: ""
Jun 7 14:35:24.416: INFO: stdout: "true"
Jun 7 14:35:24.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7wsg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9689'
Jun 7 14:35:24.510: INFO: stderr: ""
Jun 7 14:35:24.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 14:35:24.510: INFO: validating pod update-demo-nautilus-k7wsg
Jun 7 14:35:24.519: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 14:35:24.519: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 14:35:24.519: INFO: update-demo-nautilus-k7wsg is verified up and running
Jun 7 14:35:24.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tqlw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9689'
Jun 7 14:35:24.611: INFO: stderr: ""
Jun 7 14:35:24.611: INFO: stdout: "true"
Jun 7 14:35:24.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tqlw7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9689'
Jun 7 14:35:24.713: INFO: stderr: ""
Jun 7 14:35:24.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 14:35:24.713: INFO: validating pod update-demo-nautilus-tqlw7
Jun 7 14:35:24.719: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 14:35:24.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 14:35:24.719: INFO: update-demo-nautilus-tqlw7 is verified up and running
STEP: using delete to clean up resources
Jun 7 14:35:24.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9689'
Jun 7 14:35:24.815: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 14:35:24.815: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 7 14:35:24.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9689'
Jun 7 14:35:24.928: INFO: stderr: "No resources found.\n"
Jun 7 14:35:24.928: INFO: stdout: ""
Jun 7 14:35:24.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9689 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 14:35:25.055: INFO: stderr: ""
Jun 7 14:35:25.055: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:35:25.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9689" for this suite.
Jun 7 14:35:47.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:35:47.176: INFO: namespace kubectl-9689 deletion completed in 22.116485038s
• [SLOW TEST:28.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:35:47.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 14:35:47.243: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087" in namespace "projected-7030" to be "success or failure"
Jun 7 14:35:47.256: INFO: Pod "downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087": Phase="Pending", Reason="", readiness=false. Elapsed: 12.531655ms
Jun 7 14:35:49.260: INFO: Pod "downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016767496s
Jun 7 14:35:51.264: INFO: Pod "downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020456529s
STEP: Saw pod success
Jun 7 14:35:51.264: INFO: Pod "downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087" satisfied condition "success or failure"
Jun 7 14:35:51.266: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087 container client-container: 
STEP: delete the pod
Jun 7 14:35:51.300: INFO: Waiting for pod downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087 to disappear
Jun 7 14:35:51.331: INFO: Pod downwardapi-volume-e84a49e3-9df1-4470-a24d-3ec2a69ad087 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:35:51.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7030" for this suite.
Jun 7 14:35:57.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:35:57.573: INFO: namespace projected-7030 deletion completed in 6.238689691s
• [SLOW TEST:10.397 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:35:57.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jun 7 14:35:58.193: INFO: created pod pod-service-account-defaultsa
Jun 7 14:35:58.193: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jun 7 14:35:58.200: INFO: created pod pod-service-account-mountsa
Jun 7 14:35:58.200: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jun 7 14:35:58.231: INFO: created pod pod-service-account-nomountsa
Jun 7 14:35:58.231: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jun 7 14:35:58.255: INFO: created pod pod-service-account-defaultsa-mountspec
Jun 7 14:35:58.255: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jun 7 14:35:58.327: INFO: created pod pod-service-account-mountsa-mountspec
Jun 7 14:35:58.327: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jun 7 14:35:58.392: INFO: created pod pod-service-account-nomountsa-mountspec
Jun 7 14:35:58.392: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jun 7 14:35:58.412: INFO: created pod pod-service-account-defaultsa-nomountspec
Jun 7 14:35:58.412: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jun 7 14:35:58.456: INFO: created pod pod-service-account-mountsa-nomountspec
Jun 7 14:35:58.456: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jun 7 14:35:58.543: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun 7 14:35:58.543: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:35:58.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8552" for this suite.
Jun 7 14:36:26.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:36:26.752: INFO: namespace svcaccounts-8552 deletion completed in 28.125897176s
• [SLOW TEST:29.178 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:36:26.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 7 14:36:26.814: INFO: Waiting up to 5m0s for pod "pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa" in namespace "emptydir-5450" to be "success or failure"
Jun 7 14:36:26.835: INFO: Pod "pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa": Phase="Pending", Reason="", readiness=false. Elapsed: 20.664393ms
Jun 7 14:36:28.839: INFO: Pod "pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025170904s
Jun 7 14:36:30.886: INFO: Pod "pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072265416s
STEP: Saw pod success
Jun 7 14:36:30.886: INFO: Pod "pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa" satisfied condition "success or failure"
Jun 7 14:36:30.888: INFO: Trying to get logs from node iruya-worker pod pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa container test-container: 
STEP: delete the pod
Jun 7 14:36:30.904: INFO: Waiting for pod pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa to disappear
Jun 7 14:36:30.916: INFO: Pod pod-079c8b36-e6cb-41f1-80c2-1cb7bba398fa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:36:30.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5450" for this suite.
Jun 7 14:36:36.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:36:37.004: INFO: namespace emptydir-5450 deletion completed in 6.084552665s
• [SLOW TEST:10.252 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:36:37.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 14:36:37.082: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jun 7 14:36:37.134: INFO: Pod name sample-pod: Found 0 pods out of 1
Jun 7 14:36:42.138: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 7 14:36:42.138: INFO: Creating deployment "test-rolling-update-deployment"
Jun 7 14:36:42.142: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jun 7 14:36:42.149: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jun 7 14:36:44.156: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jun 7 14:36:44.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137402, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137402, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137402, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137402, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}},
CollisionCount:(*int32)(nil)} Jun 7 14:36:46.163: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 7 14:36:46.171: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6711,SelfLink:/apis/apps/v1/namespaces/deployment-6711/deployments/test-rolling-update-deployment,UID:de385a53-d4bf-4838-8878-e54a251db1b1,ResourceVersion:15169079,Generation:1,CreationTimestamp:2020-06-07 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-07 14:36:42 +0000 UTC 2020-06-07 14:36:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-07 14:36:45 +0000 UTC 2020-06-07 14:36:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 7 14:36:46.174: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6711,SelfLink:/apis/apps/v1/namespaces/deployment-6711/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:7e9daed6-012c-49a5-958a-08b5bf9b0062,ResourceVersion:15169068,Generation:1,CreationTimestamp:2020-06-07 14:36:42 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de385a53-d4bf-4838-8878-e54a251db1b1 0xc0017b6f67 0xc0017b6f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 7 14:36:46.174: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 7 14:36:46.174: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6711,SelfLink:/apis/apps/v1/namespaces/deployment-6711/replicasets/test-rolling-update-controller,UID:cd9be9c0-2b42-4703-97f4-651ae5ad08c6,ResourceVersion:15169078,Generation:2,CreationTimestamp:2020-06-07 14:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de385a53-d4bf-4838-8878-e54a251db1b1 0xc0017b6e97 0xc0017b6e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 7 14:36:46.177: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-zk7bx" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-zk7bx,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6711,SelfLink:/api/v1/namespaces/deployment-6711/pods/test-rolling-update-deployment-79f6b9d75c-zk7bx,UID:b0945686-4bd4-422d-8f22-12964ee82ee6,ResourceVersion:15169067,Generation:0,CreationTimestamp:2020-06-07 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 7e9daed6-012c-49a5-958a-08b5bf9b0062 0xc002d10707 0xc002d10708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4c29w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4c29w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4c29w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d10780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d107a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:36:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:36:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:36:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 14:36:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.34,StartTime:2020-06-07 14:36:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-07 14:36:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://034b5a56dc4381a324165c920e5e800273b8847c616d9d2a707a3a6031fc9804}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:36:46.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-6711" for this suite. Jun 7 14:36:52.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:36:52.414: INFO: namespace deployment-6711 deletion completed in 6.233811772s • [SLOW TEST:15.410 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:36:52.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-51527429-23b8-4b72-837d-42b4760febbc STEP: Creating a pod to test consume configMaps Jun 7 14:36:52.506: INFO: Waiting up to 5m0s for pod "pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3" in namespace "configmap-9923" to be "success or failure" Jun 7 14:36:52.513: INFO: Pod "pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396252ms Jun 7 14:36:54.517: INFO: Pod "pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010928304s Jun 7 14:36:56.521: INFO: Pod "pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014901357s STEP: Saw pod success Jun 7 14:36:56.521: INFO: Pod "pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3" satisfied condition "success or failure" Jun 7 14:36:56.524: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3 container configmap-volume-test: STEP: delete the pod Jun 7 14:36:56.694: INFO: Waiting for pod pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3 to disappear Jun 7 14:36:56.729: INFO: Pod pod-configmaps-1355fc0c-d7ed-4a24-8e42-070379756ca3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:36:56.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9923" for this suite. Jun 7 14:37:02.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:37:02.845: INFO: namespace configmap-9923 deletion completed in 6.1117872s • [SLOW TEST:10.430 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:37:02.845: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7 Jun 7 14:37:02.944: INFO: Pod name my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7: Found 0 pods out of 1 Jun 7 14:37:07.955: INFO: Pod name my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7: Found 1 pods out of 1 Jun 7 14:37:07.955: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7" are running Jun 7 14:37:07.958: INFO: Pod "my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7-66cfd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:37:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:37:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:37:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 14:37:02 +0000 UTC Reason: Message:}]) Jun 7 14:37:07.958: INFO: Trying to dial the pod Jun 7 14:37:12.967: INFO: Controller my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7: Got expected result from replica 1 [my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7-66cfd]: "my-hostname-basic-3fbb720e-7f2a-4242-927d-3c36651229c7-66cfd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:37:12.968: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4308" for this suite. Jun 7 14:37:18.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:37:19.061: INFO: namespace replication-controller-4308 deletion completed in 6.090761356s • [SLOW TEST:16.216 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:37:19.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 7 14:37:19.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3301' Jun 7 14:37:19.239: 
INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 14:37:19.239: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 7 14:37:19.298: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-n64tf] Jun 7 14:37:19.298: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-n64tf" in namespace "kubectl-3301" to be "running and ready" Jun 7 14:37:19.300: INFO: Pod "e2e-test-nginx-rc-n64tf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329385ms Jun 7 14:37:21.369: INFO: Pod "e2e-test-nginx-rc-n64tf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070433539s Jun 7 14:37:23.373: INFO: Pod "e2e-test-nginx-rc-n64tf": Phase="Running", Reason="", readiness=true. Elapsed: 4.074902869s Jun 7 14:37:23.373: INFO: Pod "e2e-test-nginx-rc-n64tf" satisfied condition "running and ready" Jun 7 14:37:23.373: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-n64tf] Jun 7 14:37:23.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3301' Jun 7 14:37:23.618: INFO: stderr: "" Jun 7 14:37:23.618: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jun 7 14:37:23.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3301' Jun 7 14:37:23.733: INFO: stderr: "" Jun 7 14:37:23.733: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:37:23.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3301" for this suite. Jun 7 14:37:45.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:37:45.827: INFO: namespace kubectl-3301 deletion completed in 22.09003578s • [SLOW TEST:26.765 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:37:45.827: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1875, will wait for the garbage collector to delete the pods Jun 7 14:37:51.974: INFO: Deleting Job.batch foo took: 7.450168ms Jun 7 14:37:52.274: INFO: Terminating Job.batch foo pods took: 300.260205ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:38:32.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1875" for this suite. Jun 7 14:38:38.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:38:38.290: INFO: namespace job-1875 deletion completed in 6.100167622s • [SLOW TEST:52.462 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:38:38.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 7 14:38:38.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7151,SelfLink:/api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-watch-closed,UID:235b804f-faa7-44cd-958f-5e61528034b8,ResourceVersion:15169467,Generation:0,CreationTimestamp:2020-06-07 14:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 14:38:38.400: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7151,SelfLink:/api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-watch-closed,UID:235b804f-faa7-44cd-958f-5e61528034b8,ResourceVersion:15169468,Generation:0,CreationTimestamp:2020-06-07 14:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version 
observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 7 14:38:38.412: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7151,SelfLink:/api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-watch-closed,UID:235b804f-faa7-44cd-958f-5e61528034b8,ResourceVersion:15169469,Generation:0,CreationTimestamp:2020-06-07 14:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 14:38:38.412: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7151,SelfLink:/api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-watch-closed,UID:235b804f-faa7-44cd-958f-5e61528034b8,ResourceVersion:15169470,Generation:0,CreationTimestamp:2020-06-07 14:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:38:38.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7151" for this suite. 
Jun 7 14:38:44.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:38:44.507: INFO: namespace watch-7151 deletion completed in 6.092071076s • [SLOW TEST:6.216 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:38:44.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 7 14:38:44.566: INFO: Waiting up to 5m0s for pod "downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b" in namespace "downward-api-9139" to be "success or failure" Jun 7 14:38:44.584: INFO: Pod "downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209757ms Jun 7 14:38:46.628: INFO: Pod "downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.061946696s Jun 7 14:38:48.632: INFO: Pod "downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066765663s STEP: Saw pod success Jun 7 14:38:48.633: INFO: Pod "downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b" satisfied condition "success or failure" Jun 7 14:38:48.636: INFO: Trying to get logs from node iruya-worker2 pod downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b container dapi-container: STEP: delete the pod Jun 7 14:38:48.661: INFO: Waiting for pod downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b to disappear Jun 7 14:38:48.666: INFO: Pod downward-api-2807f0e9-ae6a-4ec5-bac4-caae4b74302b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:38:48.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9139" for this suite. Jun 7 14:38:54.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:38:54.775: INFO: namespace downward-api-9139 deletion completed in 6.10600438s • [SLOW TEST:10.268 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Jun 7 14:38:54.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2087 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 7 14:38:54.827: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 7 14:39:16.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.38:8080/dial?request=hostName&protocol=http&host=10.244.1.37&port=8080&tries=1'] Namespace:pod-network-test-2087 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 14:39:16.937: INFO: >>> kubeConfig: /root/.kube/config I0607 14:39:16.972104 6 log.go:172] (0xc0009e02c0) (0xc0011f1a40) Create stream I0607 14:39:16.972156 6 log.go:172] (0xc0009e02c0) (0xc0011f1a40) Stream added, broadcasting: 1 I0607 14:39:16.974924 6 log.go:172] (0xc0009e02c0) Reply frame received for 1 I0607 14:39:16.974979 6 log.go:172] (0xc0009e02c0) (0xc001d7ee60) Create stream I0607 14:39:16.974998 6 log.go:172] (0xc0009e02c0) (0xc001d7ee60) Stream added, broadcasting: 3 I0607 14:39:16.975982 6 log.go:172] (0xc0009e02c0) Reply frame received for 3 I0607 14:39:16.976032 6 log.go:172] (0xc0009e02c0) (0xc00380e8c0) Create stream I0607 14:39:16.976046 6 log.go:172] (0xc0009e02c0) (0xc00380e8c0) Stream added, broadcasting: 5 I0607 14:39:16.977029 6 log.go:172] (0xc0009e02c0) Reply frame received for 5 I0607 14:39:17.074344 6 log.go:172] (0xc0009e02c0) Data frame received for 3 I0607 14:39:17.074392 6 log.go:172] (0xc001d7ee60) (3) Data frame 
handling I0607 14:39:17.074612 6 log.go:172] (0xc001d7ee60) (3) Data frame sent I0607 14:39:17.075350 6 log.go:172] (0xc0009e02c0) Data frame received for 3 I0607 14:39:17.075395 6 log.go:172] (0xc001d7ee60) (3) Data frame handling I0607 14:39:17.075687 6 log.go:172] (0xc0009e02c0) Data frame received for 5 I0607 14:39:17.075705 6 log.go:172] (0xc00380e8c0) (5) Data frame handling I0607 14:39:17.077832 6 log.go:172] (0xc0009e02c0) Data frame received for 1 I0607 14:39:17.077844 6 log.go:172] (0xc0011f1a40) (1) Data frame handling I0607 14:39:17.077850 6 log.go:172] (0xc0011f1a40) (1) Data frame sent I0607 14:39:17.077858 6 log.go:172] (0xc0009e02c0) (0xc0011f1a40) Stream removed, broadcasting: 1 I0607 14:39:17.077868 6 log.go:172] (0xc0009e02c0) Go away received I0607 14:39:17.077996 6 log.go:172] (0xc0009e02c0) (0xc0011f1a40) Stream removed, broadcasting: 1 I0607 14:39:17.078024 6 log.go:172] (0xc0009e02c0) (0xc001d7ee60) Stream removed, broadcasting: 3 I0607 14:39:17.078037 6 log.go:172] (0xc0009e02c0) (0xc00380e8c0) Stream removed, broadcasting: 5 Jun 7 14:39:17.078: INFO: Waiting for endpoints: map[] Jun 7 14:39:17.082: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.38:8080/dial?request=hostName&protocol=http&host=10.244.2.32&port=8080&tries=1'] Namespace:pod-network-test-2087 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 14:39:17.082: INFO: >>> kubeConfig: /root/.kube/config I0607 14:39:17.112945 6 log.go:172] (0xc000288bb0) (0xc0030aa500) Create stream I0607 14:39:17.112984 6 log.go:172] (0xc000288bb0) (0xc0030aa500) Stream added, broadcasting: 1 I0607 14:39:17.115488 6 log.go:172] (0xc000288bb0) Reply frame received for 1 I0607 14:39:17.115541 6 log.go:172] (0xc000288bb0) (0xc001d7f4a0) Create stream I0607 14:39:17.115558 6 log.go:172] (0xc000288bb0) (0xc001d7f4a0) Stream added, broadcasting: 3 I0607 14:39:17.116428 6 log.go:172] (0xc000288bb0) 
Reply frame received for 3 I0607 14:39:17.116469 6 log.go:172] (0xc000288bb0) (0xc0011f1b80) Create stream I0607 14:39:17.116481 6 log.go:172] (0xc000288bb0) (0xc0011f1b80) Stream added, broadcasting: 5 I0607 14:39:17.117378 6 log.go:172] (0xc000288bb0) Reply frame received for 5 I0607 14:39:17.176451 6 log.go:172] (0xc000288bb0) Data frame received for 3 I0607 14:39:17.176517 6 log.go:172] (0xc001d7f4a0) (3) Data frame handling I0607 14:39:17.176543 6 log.go:172] (0xc001d7f4a0) (3) Data frame sent I0607 14:39:17.177055 6 log.go:172] (0xc000288bb0) Data frame received for 3 I0607 14:39:17.177339 6 log.go:172] (0xc001d7f4a0) (3) Data frame handling I0607 14:39:17.177395 6 log.go:172] (0xc000288bb0) Data frame received for 5 I0607 14:39:17.177420 6 log.go:172] (0xc0011f1b80) (5) Data frame handling I0607 14:39:17.179394 6 log.go:172] (0xc000288bb0) Data frame received for 1 I0607 14:39:17.179434 6 log.go:172] (0xc0030aa500) (1) Data frame handling I0607 14:39:17.179473 6 log.go:172] (0xc0030aa500) (1) Data frame sent I0607 14:39:17.179502 6 log.go:172] (0xc000288bb0) (0xc0030aa500) Stream removed, broadcasting: 1 I0607 14:39:17.179522 6 log.go:172] (0xc000288bb0) Go away received I0607 14:39:17.179722 6 log.go:172] (0xc000288bb0) (0xc0030aa500) Stream removed, broadcasting: 1 I0607 14:39:17.179756 6 log.go:172] (0xc000288bb0) (0xc001d7f4a0) Stream removed, broadcasting: 3 I0607 14:39:17.179787 6 log.go:172] (0xc000288bb0) (0xc0011f1b80) Stream removed, broadcasting: 5 Jun 7 14:39:17.179: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:39:17.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2087" for this suite. 
Jun 7 14:39:39.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:39:39.282: INFO: namespace pod-network-test-2087 deletion completed in 22.096223374s • [SLOW TEST:44.506 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:39:39.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jun 7 14:39:39.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2837 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat 
&& echo 'stdin closed'' Jun 7 14:39:42.906: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0607 14:39:42.833646 3712 log.go:172] (0xc0007209a0) (0xc0005ea140) Create stream\nI0607 14:39:42.833693 3712 log.go:172] (0xc0007209a0) (0xc0005ea140) Stream added, broadcasting: 1\nI0607 14:39:42.835895 3712 log.go:172] (0xc0007209a0) Reply frame received for 1\nI0607 14:39:42.835920 3712 log.go:172] (0xc0007209a0) (0xc00052f180) Create stream\nI0607 14:39:42.835929 3712 log.go:172] (0xc0007209a0) (0xc00052f180) Stream added, broadcasting: 3\nI0607 14:39:42.837056 3712 log.go:172] (0xc0007209a0) Reply frame received for 3\nI0607 14:39:42.837095 3712 log.go:172] (0xc0007209a0) (0xc0005ea1e0) Create stream\nI0607 14:39:42.837309 3712 log.go:172] (0xc0007209a0) (0xc0005ea1e0) Stream added, broadcasting: 5\nI0607 14:39:42.838320 3712 log.go:172] (0xc0007209a0) Reply frame received for 5\nI0607 14:39:42.838350 3712 log.go:172] (0xc0007209a0) (0xc00052f220) Create stream\nI0607 14:39:42.838359 3712 log.go:172] (0xc0007209a0) (0xc00052f220) Stream added, broadcasting: 7\nI0607 14:39:42.839208 3712 log.go:172] (0xc0007209a0) Reply frame received for 7\nI0607 14:39:42.839401 3712 log.go:172] (0xc00052f180) (3) Writing data frame\nI0607 14:39:42.839516 3712 log.go:172] (0xc00052f180) (3) Writing data frame\nI0607 14:39:42.840265 3712 log.go:172] (0xc0007209a0) Data frame received for 5\nI0607 14:39:42.840281 3712 log.go:172] (0xc0005ea1e0) (5) Data frame handling\nI0607 14:39:42.840296 3712 log.go:172] (0xc0005ea1e0) (5) Data frame sent\nI0607 14:39:42.840859 3712 log.go:172] (0xc0007209a0) Data frame received for 5\nI0607 14:39:42.840886 3712 log.go:172] (0xc0005ea1e0) (5) Data frame handling\nI0607 14:39:42.840903 3712 log.go:172] (0xc0005ea1e0) (5) Data frame sent\nI0607 14:39:42.884248 3712 log.go:172] 
(0xc0007209a0) Data frame received for 5\nI0607 14:39:42.884318 3712 log.go:172] (0xc0005ea1e0) (5) Data frame handling\nI0607 14:39:42.884354 3712 log.go:172] (0xc0007209a0) Data frame received for 7\nI0607 14:39:42.884393 3712 log.go:172] (0xc00052f220) (7) Data frame handling\nI0607 14:39:42.884843 3712 log.go:172] (0xc0007209a0) Data frame received for 1\nI0607 14:39:42.884866 3712 log.go:172] (0xc0005ea140) (1) Data frame handling\nI0607 14:39:42.884883 3712 log.go:172] (0xc0005ea140) (1) Data frame sent\nI0607 14:39:42.885105 3712 log.go:172] (0xc0007209a0) (0xc00052f180) Stream removed, broadcasting: 3\nI0607 14:39:42.885324 3712 log.go:172] (0xc0007209a0) (0xc0005ea140) Stream removed, broadcasting: 1\nI0607 14:39:42.885387 3712 log.go:172] (0xc0007209a0) (0xc0005ea140) Stream removed, broadcasting: 1\nI0607 14:39:42.885398 3712 log.go:172] (0xc0007209a0) (0xc00052f180) Stream removed, broadcasting: 3\nI0607 14:39:42.885403 3712 log.go:172] (0xc0007209a0) (0xc0005ea1e0) Stream removed, broadcasting: 5\nI0607 14:39:42.885534 3712 log.go:172] (0xc0007209a0) (0xc00052f220) Stream removed, broadcasting: 7\nI0607 14:39:42.885560 3712 log.go:172] (0xc0007209a0) Go away received\n" Jun 7 14:39:42.906: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:39:44.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2837" for this suite. 
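[Editor's note] The stderr captured above warns that `kubectl run --generator=job/v1` is deprecated. As an aside for readers reproducing this test by hand, an equivalent Job manifest might look like the sketch below. Only the job name, image, restart policy, and command are taken from the kubectl invocation in the log; all other fields (such as `backoffLimit`) are illustrative assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job   # name taken from the logged kubectl command
spec:
  backoffLimit: 6                 # assumed default; not visible in the log
  template:
    spec:
      restartPolicy: OnFailure    # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29   # from --image=...
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true               # from --stdin; the test pipes data in, then closes
```

Applying this with `kubectl create -f` plus `kubectl attach` approximates the deprecated `run --rm --attach` flow, though the automatic deletion on exit would need to be done separately.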
Jun 7 14:39:50.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:39:51.027: INFO: namespace kubectl-2837 deletion completed in 6.11079904s • [SLOW TEST:11.745 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:39:51.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 7 14:39:51.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15" in namespace "downward-api-139" to be "success or failure" Jun 7 14:39:51.108: INFO: Pod "downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.897665ms Jun 7 14:39:53.112: INFO: Pod "downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007800821s Jun 7 14:39:55.116: INFO: Pod "downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011916194s STEP: Saw pod success Jun 7 14:39:55.116: INFO: Pod "downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15" satisfied condition "success or failure" Jun 7 14:39:55.119: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15 container client-container: STEP: delete the pod Jun 7 14:39:55.156: INFO: Waiting for pod downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15 to disappear Jun 7 14:39:55.158: INFO: Pod downwardapi-volume-1674a951-19d3-4b8d-aa9d-37a5265afb15 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:39:55.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-139" for this suite. 
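[Editor's note] The "should set DefaultMode on files" test above creates a pod with a downwardAPI volume and asserts the projected files carry the requested mode. A minimal sketch of such a pod follows; the image, mount path, mode value, and projected item are illustrative assumptions, not values read from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name matches the one in the log
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # illustrative octal mode the files should receive
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```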
Jun 7 14:40:01.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:40:01.241: INFO: namespace downward-api-139 deletion completed in 6.078673163s • [SLOW TEST:10.213 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:40:01.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4921fbf6-30d8-4483-b823-2a716908cd1c STEP: Creating a pod to test consume secrets Jun 7 14:40:01.329: INFO: Waiting up to 5m0s for pod "pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8" in namespace "secrets-3617" to be "success or failure" Jun 7 14:40:01.337: INFO: Pod "pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.916137ms Jun 7 14:40:03.515: INFO: Pod "pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.185693685s Jun 7 14:40:05.519: INFO: Pod "pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8": Phase="Running", Reason="", readiness=true. Elapsed: 4.189937584s Jun 7 14:40:07.524: INFO: Pod "pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194226919s STEP: Saw pod success Jun 7 14:40:07.524: INFO: Pod "pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8" satisfied condition "success or failure" Jun 7 14:40:07.527: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8 container secret-env-test: STEP: delete the pod Jun 7 14:40:07.552: INFO: Waiting for pod pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8 to disappear Jun 7 14:40:07.559: INFO: Pod pod-secrets-a8581f6c-9514-4908-97b5-1f62d547b3e8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:40:07.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3617" for this suite. 
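[Editor's note] For context on the Secrets test above ("should be consumable from pods in env vars"), a minimal sketch of the secret-plus-pod pair such a test creates is shown below. The container name matches the log (`secret-env-test`); the secret key, value, env var name, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example   # illustrative; the log uses a generated name
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29       # assumed image
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example
          key: data-1
```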
Jun 7 14:40:13.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:40:13.675: INFO: namespace secrets-3617 deletion completed in 6.113099896s • [SLOW TEST:12.434 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:40:13.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2482 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 7 14:40:13.751: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 7 14:40:37.904: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.36:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2482 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 14:40:37.904: 
INFO: >>> kubeConfig: /root/.kube/config I0607 14:40:37.942232 6 log.go:172] (0xc00351a000) (0xc00341c960) Create stream I0607 14:40:37.942266 6 log.go:172] (0xc00351a000) (0xc00341c960) Stream added, broadcasting: 1 I0607 14:40:37.944874 6 log.go:172] (0xc00351a000) Reply frame received for 1 I0607 14:40:37.944913 6 log.go:172] (0xc00351a000) (0xc002c834a0) Create stream I0607 14:40:37.944926 6 log.go:172] (0xc00351a000) (0xc002c834a0) Stream added, broadcasting: 3 I0607 14:40:37.946265 6 log.go:172] (0xc00351a000) Reply frame received for 3 I0607 14:40:37.946308 6 log.go:172] (0xc00351a000) (0xc002c83540) Create stream I0607 14:40:37.946322 6 log.go:172] (0xc00351a000) (0xc002c83540) Stream added, broadcasting: 5 I0607 14:40:37.947386 6 log.go:172] (0xc00351a000) Reply frame received for 5 I0607 14:40:38.022215 6 log.go:172] (0xc00351a000) Data frame received for 3 I0607 14:40:38.022261 6 log.go:172] (0xc002c834a0) (3) Data frame handling I0607 14:40:38.022304 6 log.go:172] (0xc002c834a0) (3) Data frame sent I0607 14:40:38.022329 6 log.go:172] (0xc00351a000) Data frame received for 3 I0607 14:40:38.022348 6 log.go:172] (0xc002c834a0) (3) Data frame handling I0607 14:40:38.022387 6 log.go:172] (0xc00351a000) Data frame received for 5 I0607 14:40:38.022401 6 log.go:172] (0xc002c83540) (5) Data frame handling I0607 14:40:38.024097 6 log.go:172] (0xc00351a000) Data frame received for 1 I0607 14:40:38.024113 6 log.go:172] (0xc00341c960) (1) Data frame handling I0607 14:40:38.024132 6 log.go:172] (0xc00341c960) (1) Data frame sent I0607 14:40:38.024144 6 log.go:172] (0xc00351a000) (0xc00341c960) Stream removed, broadcasting: 1 I0607 14:40:38.024233 6 log.go:172] (0xc00351a000) (0xc00341c960) Stream removed, broadcasting: 1 I0607 14:40:38.024243 6 log.go:172] (0xc00351a000) (0xc002c834a0) Stream removed, broadcasting: 3 I0607 14:40:38.024304 6 log.go:172] (0xc00351a000) Go away received I0607 14:40:38.024346 6 log.go:172] (0xc00351a000) (0xc002c83540) Stream removed, 
broadcasting: 5 Jun 7 14:40:38.024: INFO: Found all expected endpoints: [netserver-0] Jun 7 14:40:38.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.39:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2482 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 14:40:38.027: INFO: >>> kubeConfig: /root/.kube/config I0607 14:40:38.057519 6 log.go:172] (0xc00351a840) (0xc00341cb40) Create stream I0607 14:40:38.057556 6 log.go:172] (0xc00351a840) (0xc00341cb40) Stream added, broadcasting: 1 I0607 14:40:38.059812 6 log.go:172] (0xc00351a840) Reply frame received for 1 I0607 14:40:38.059859 6 log.go:172] (0xc00351a840) (0xc002764140) Create stream I0607 14:40:38.059875 6 log.go:172] (0xc00351a840) (0xc002764140) Stream added, broadcasting: 3 I0607 14:40:38.060909 6 log.go:172] (0xc00351a840) Reply frame received for 3 I0607 14:40:38.060959 6 log.go:172] (0xc00351a840) (0xc002c83680) Create stream I0607 14:40:38.060975 6 log.go:172] (0xc00351a840) (0xc002c83680) Stream added, broadcasting: 5 I0607 14:40:38.062568 6 log.go:172] (0xc00351a840) Reply frame received for 5 I0607 14:40:38.138353 6 log.go:172] (0xc00351a840) Data frame received for 5 I0607 14:40:38.138407 6 log.go:172] (0xc002c83680) (5) Data frame handling I0607 14:40:38.138440 6 log.go:172] (0xc00351a840) Data frame received for 3 I0607 14:40:38.138460 6 log.go:172] (0xc002764140) (3) Data frame handling I0607 14:40:38.138486 6 log.go:172] (0xc002764140) (3) Data frame sent I0607 14:40:38.138503 6 log.go:172] (0xc00351a840) Data frame received for 3 I0607 14:40:38.138519 6 log.go:172] (0xc002764140) (3) Data frame handling I0607 14:40:38.139743 6 log.go:172] (0xc00351a840) Data frame received for 1 I0607 14:40:38.139768 6 log.go:172] (0xc00341cb40) (1) Data frame handling I0607 14:40:38.139776 6 log.go:172] (0xc00341cb40) (1) Data frame sent I0607 
14:40:38.139792 6 log.go:172] (0xc00351a840) (0xc00341cb40) Stream removed, broadcasting: 1 I0607 14:40:38.139828 6 log.go:172] (0xc00351a840) Go away received I0607 14:40:38.139928 6 log.go:172] (0xc00351a840) (0xc00341cb40) Stream removed, broadcasting: 1 I0607 14:40:38.139966 6 log.go:172] (0xc00351a840) (0xc002764140) Stream removed, broadcasting: 3 I0607 14:40:38.139992 6 log.go:172] (0xc00351a840) (0xc002c83680) Stream removed, broadcasting: 5 Jun 7 14:40:38.140: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 7 14:40:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2482" for this suite. Jun 7 14:41:00.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 7 14:41:00.283: INFO: namespace pod-network-test-2482 deletion completed in 22.139655352s • [SLOW TEST:46.608 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 7 14:41:00.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 7 14:41:00.495: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:00.500: INFO: Number of nodes with available pods: 0 Jun 7 14:41:00.500: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:41:01.506: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:01.509: INFO: Number of nodes with available pods: 0 Jun 7 14:41:01.509: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:41:02.505: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:02.509: INFO: Number of nodes with available pods: 0 Jun 7 14:41:02.509: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:41:03.553: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:03.555: INFO: Number of nodes with available pods: 0 Jun 7 14:41:03.555: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:41:04.504: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:04.507: INFO: Number of nodes with available pods: 0 Jun 7 14:41:04.507: INFO: Node iruya-worker is running more than one daemon pod Jun 7 14:41:05.504: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:05.507: INFO: Number of nodes with available pods: 2 Jun 7 14:41:05.507: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 7 14:41:05.525: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 14:41:05.537: INFO: Number of nodes with available pods: 2 Jun 7 14:41:05.537: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
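[Editor's note] The DaemonSet test above repeatedly logs that daemon pods "can't tolerate node iruya-control-plane" because of the `node-role.kubernetes.io/master:NoSchedule` taint, so only the two worker nodes are counted. A minimal sketch of a DaemonSet like the test's `daemon-set` follows; the image and labels are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set             # name taken from the log
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14      # assumed image
      # Without a toleration for node-role.kubernetes.io/master:NoSchedule,
      # the control-plane node is skipped, exactly as the log shows.
```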
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8565, will wait for the garbage collector to delete the pods
Jun 7 14:41:06.711: INFO: Deleting DaemonSet.extensions daemon-set took: 6.66991ms
Jun 7 14:41:07.111: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.259052ms
Jun 7 14:41:12.228: INFO: Number of nodes with available pods: 0
Jun 7 14:41:12.228: INFO: Number of running nodes: 0, number of available pods: 0
Jun 7 14:41:12.231: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8565/daemonsets","resourceVersion":"15170057"},"items":null}
Jun 7 14:41:12.235: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8565/pods","resourceVersion":"15170057"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:41:12.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8565" for this suite.
Jun 7 14:41:18.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:41:18.344: INFO: namespace daemonsets-8565 deletion completed in 6.095582634s
• [SLOW TEST:18.060 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:41:18.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 14:41:18.458: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 7 14:41:18.475: INFO: Number of nodes with available pods: 0
Jun 7 14:41:18.475: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 7 14:41:18.544: INFO: Number of nodes with available pods: 0
Jun 7 14:41:18.544: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:19.548: INFO: Number of nodes with available pods: 0
Jun 7 14:41:19.548: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:20.643: INFO: Number of nodes with available pods: 0
Jun 7 14:41:20.643: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:21.548: INFO: Number of nodes with available pods: 0
Jun 7 14:41:21.548: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:22.548: INFO: Number of nodes with available pods: 1
Jun 7 14:41:22.548: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 7 14:41:22.584: INFO: Number of nodes with available pods: 1
Jun 7 14:41:22.584: INFO: Number of running nodes: 0, number of available pods: 1
Jun 7 14:41:23.588: INFO: Number of nodes with available pods: 0
Jun 7 14:41:23.588: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 7 14:41:23.597: INFO: Number of nodes with available pods: 0
Jun 7 14:41:23.597: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:24.601: INFO: Number of nodes with available pods: 0
Jun 7 14:41:24.602: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:25.602: INFO: Number of nodes with available pods: 0
Jun 7 14:41:25.602: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:26.601: INFO: Number of nodes with available pods: 0
Jun 7 14:41:26.601: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:27.601: INFO: Number of nodes with available pods: 0
Jun 7 14:41:27.601: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:28.602: INFO: Number of nodes with available pods: 0
Jun 7 14:41:28.602: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:29.602: INFO: Number of nodes with available pods: 0
Jun 7 14:41:29.602: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:30.606: INFO: Number of nodes with available pods: 0
Jun 7 14:41:30.606: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:31.601: INFO: Number of nodes with available pods: 0
Jun 7 14:41:31.601: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:32.602: INFO: Number of nodes with available pods: 0
Jun 7 14:41:32.602: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:33.631: INFO: Number of nodes with available pods: 0
Jun 7 14:41:33.631: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:34.602: INFO: Number of nodes with available pods: 0
Jun 7 14:41:34.602: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 14:41:35.601: INFO: Number of nodes with available pods: 1
Jun 7 14:41:35.601: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6294, will wait for the garbage collector to delete the pods
Jun 7 14:41:35.674: INFO: Deleting DaemonSet.extensions daemon-set took: 14.091191ms
Jun 7 14:41:35.974: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.363079ms
Jun 7 14:41:42.279: INFO: Number of nodes with available pods: 0
Jun 7 14:41:42.279: INFO: Number of running nodes: 0, number of available pods: 0
Jun 7 14:41:42.282: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6294/daemonsets","resourceVersion":"15170192"},"items":null}
Jun 7 14:41:42.285: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6294/pods","resourceVersion":"15170192"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:41:42.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6294" for this suite.
Jun 7 14:41:48.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:41:48.508: INFO: namespace daemonsets-6294 deletion completed in 6.173192212s
• [SLOW TEST:30.163 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:41:48.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6b2c167c-1bfe-4c57-81b3-0e157e3ae808
STEP: Creating a pod to test consume configMaps
Jun 7 14:41:48.621: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21" in namespace "projected-5226" to be "success or failure"
Jun 7 14:41:48.637: INFO: Pod "pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21": Phase="Pending", Reason="", readiness=false. Elapsed: 16.319649ms
Jun 7 14:41:50.656: INFO: Pod "pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034519422s
Jun 7 14:41:52.918: INFO: Pod "pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297094604s
STEP: Saw pod success
Jun 7 14:41:52.918: INFO: Pod "pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21" satisfied condition "success or failure"
Jun 7 14:41:52.922: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21 container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 14:41:53.327: INFO: Waiting for pod pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21 to disappear
Jun 7 14:41:53.350: INFO: Pod pod-projected-configmaps-a3dcceb9-a144-4c09-b923-a28911913f21 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:41:53.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5226" for this suite.
Jun 7 14:41:59.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:41:59.451: INFO: namespace projected-5226 deletion completed in 6.096768491s
• [SLOW TEST:10.943 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:41:59.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ec368873-6d5c-4c11-bdfe-947bc7e6556b
STEP: Creating a pod to test consume secrets
Jun 7 14:41:59.528: INFO: Waiting up to 5m0s for pod "pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df" in namespace "secrets-3862" to be "success or failure"
Jun 7 14:41:59.539: INFO: Pod "pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.5924ms
Jun 7 14:42:01.600: INFO: Pod "pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072432754s
Jun 7 14:42:03.605: INFO: Pod "pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076826783s
STEP: Saw pod success
Jun 7 14:42:03.605: INFO: Pod "pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df" satisfied condition "success or failure"
Jun 7 14:42:03.608: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df container secret-volume-test:
STEP: delete the pod
Jun 7 14:42:03.678: INFO: Waiting for pod pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df to disappear
Jun 7 14:42:03.681: INFO: Pod pod-secrets-ab0d4137-3824-4b70-a903-1b740ff464df no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:42:03.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3862" for this suite.
Jun 7 14:42:09.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:42:09.777: INFO: namespace secrets-3862 deletion completed in 6.092206227s
• [SLOW TEST:10.326 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:42:09.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 7 14:42:09.828: INFO: PodSpec: initContainers in spec.initContainers
Jun 7 14:43:01.271: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-01ad6492-7adc-49fe-bf2e-1d62b60c9c4e", GenerateName:"", Namespace:"init-container-2630",
SelfLink:"/api/v1/namespaces/init-container-2630/pods/pod-init-01ad6492-7adc-49fe-bf2e-1d62b60c9c4e", UID:"e8765358-0976-42dc-a8bf-fb33cc389f2d", ResourceVersion:"15170441", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727137729, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"828199817"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-thhct", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002ed6000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thhct", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thhct", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thhct", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001edc088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022f2060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001edc110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001edc130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001edc138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001edc13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137729, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137729, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137729, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727137729, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.41", StartTime:(*v1.Time)(0xc0030fc240), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025eecb0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025eed20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://346f3911f8aa7233872dee8166984af5a2160c1c99e68b59d020d299c4bce4e1"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030fc280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030fc260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:43:01.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2630" for this suite.
Jun 7 14:43:23.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:43:23.446: INFO: namespace init-container-2630 deletion completed in 22.112639925s
• [SLOW TEST:73.668 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 14:43:23.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2059
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2059 to expose endpoints map[]
Jun 7 14:43:23.615: INFO: Get endpoints failed (9.583168ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun 7 14:43:24.619: INFO: successfully validated that service multi-endpoint-test in namespace services-2059 exposes endpoints map[] (1.013426083s elapsed)
STEP: Creating pod pod1 in namespace services-2059
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2059 to expose endpoints map[pod1:[100]]
Jun 7 14:43:28.666: INFO: successfully validated that service multi-endpoint-test in namespace services-2059 exposes endpoints map[pod1:[100]] (4.039752764s elapsed)
STEP: Creating pod pod2 in namespace services-2059
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2059 to expose endpoints map[pod1:[100] pod2:[101]]
Jun 7 14:43:31.768: INFO: successfully validated that service multi-endpoint-test in namespace services-2059 exposes endpoints map[pod1:[100] pod2:[101]] (3.09783215s elapsed)
STEP: Deleting pod pod1 in namespace services-2059
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2059 to expose endpoints map[pod2:[101]]
Jun 7 14:43:32.795: INFO: successfully validated that service multi-endpoint-test in namespace services-2059 exposes endpoints map[pod2:[101]] (1.02211573s elapsed)
STEP: Deleting pod pod2 in namespace services-2059
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2059 to expose endpoints map[]
Jun 7 14:43:33.826: INFO: successfully validated that service multi-endpoint-test in namespace services-2059 exposes endpoints map[] (1.018078791s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 14:43:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2059" for this suite.
Jun 7 14:43:55.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 14:43:55.936: INFO: namespace services-2059 deletion completed in 22.085563909s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:32.490 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
Jun 7 14:43:55.936: INFO: Running AfterSuite actions on all nodes
Jun 7 14:43:55.936: INFO: Running AfterSuite actions on node 1
Jun 7 14:43:55.936: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6481.086 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS