I0607 12:55:54.666594 6 e2e.go:243] Starting e2e run "c47f29a4-0a06-4452-bdd7-01d332ca5e07" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591534553 - Will randomize all specs
Will run 215 of 4412 specs
Jun 7 12:55:54.859: INFO: >>> kubeConfig: /root/.kube/config
Jun 7 12:55:54.861: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 7 12:55:54.881: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 7 12:55:54.919: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 7 12:55:54.919: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 7 12:55:54.919: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 7 12:55:54.925: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 7 12:55:54.925: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 7 12:55:54.925: INFO: e2e test version: v1.15.11
Jun 7 12:55:54.926: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:55:54.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Jun 7 12:55:54.998: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-wtd9f in namespace proxy-8083
I0607 12:55:55.035742 6 runners.go:180] Created replication controller with name: proxy-service-wtd9f, namespace: proxy-8083, replica count: 1
I0607 12:55:56.086103 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 12:55:57.086287 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 12:55:58.086481 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 12:55:59.086708 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:00.086928 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:01.087149 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:02.087373 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:03.087663 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:04.087873 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:05.088081 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:06.088294 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:07.088531 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0607 12:56:08.088726 6 runners.go:180] proxy-service-wtd9f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 7 12:56:08.091: INFO: setup took 13.091498356s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 14.294555ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 14.419559ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 14.336906ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 14.263606ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 14.304933ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 14.54656ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 14.432847ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 14.415312ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 14.559081ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 14.754165ms)
Jun 7 12:56:08.106: INFO: (0) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 14.67665ms)
Jun 7 12:56:08.107: INFO: (0) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 5.075664ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 5.088229ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 5.087361ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 5.161191ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.318957ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 5.204575ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.438021ms)
Jun 7 12:56:08.117: INFO: (1) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 3.41039ms)
Jun 7 12:56:08.127: INFO: (2) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.480137ms)
Jun 7 12:56:08.127: INFO: (2) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 3.638336ms)
Jun 7 12:56:08.127: INFO: (2) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 3.63877ms)
Jun 7 12:56:08.128: INFO: (2) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.954991ms)
Jun 7 12:56:08.128: INFO: (2) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.909699ms)
Jun 7 12:56:08.128: INFO: (2) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 2.958714ms)
Jun 7 12:56:08.132: INFO: (3) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.442195ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.62088ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.740661ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 4.678898ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.766225ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.683811ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.741634ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.272658ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 5.2176ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.253684ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 5.234115ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.244089ms)
Jun 7 12:56:08.134: INFO: (3) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 5.322934ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.364981ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.338765ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.35214ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 4.368158ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.523301ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.797127ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.835729ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.905406ms)
Jun 7 12:56:08.139: INFO: (4) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.843329ms)
Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.827786ms)
Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.844706ms)
Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.986596ms)
Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.779049ms)
Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.76615ms)
Jun 7 12:56:08.140: INFO: (4) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.813339ms)
Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.716777ms)
Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.747683ms)
Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 3.943076ms)
Jun 7 12:56:08.144: INFO: (5) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.908783ms)
Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.428086ms)
Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.599704ms)
Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 4.616336ms)
Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.711316ms)
Jun 7 12:56:08.145: INFO: (5) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.846257ms)
Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 7.248098ms)
Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 7.338346ms)
Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 7.553293ms)
Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 7.673184ms)
Jun 7 12:56:08.148: INFO: (5) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 7.671617ms)
Jun 7 12:56:08.149: INFO: (5) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 8.047127ms)
Jun 7 12:56:08.152: INFO: (6) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.263496ms)
Jun 7 12:56:08.153: INFO: (6) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.274686ms)
Jun 7 12:56:08.153: INFO: (6) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.291162ms)
Jun 7 12:56:08.153: INFO: (6) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.4605ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.870924ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.921789ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 5.001634ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.89425ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.014394ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.07202ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 5.123601ms)
Jun 7 12:56:08.154: INFO: (6) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 6.612245ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 6.648766ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 6.646478ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 6.601721ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 6.801028ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 6.685024ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 6.758521ms)
Jun 7 12:56:08.161: INFO: (7) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 6.726696ms)
Jun 7 12:56:08.162: INFO: (7) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 7.847549ms)
Jun 7 12:56:08.163: INFO: (7) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 7.994667ms)
Jun 7 12:56:08.163: INFO: (7) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 7.963576ms)
Jun 7 12:56:08.163: INFO: (7) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 8.161204ms)
Jun 7 12:56:08.166: INFO: (8) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.576321ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 3.705946ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 3.799086ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.872229ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.973839ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.070278ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.200508ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.118761ms)
Jun 7 12:56:08.167: INFO: (8) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.676635ms)
Jun 7 12:56:08.168: INFO: (8) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.604996ms)
Jun 7 12:56:08.168: INFO: (8) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.767775ms)
Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.923995ms)
Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.937345ms)
Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.940911ms)
Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.14931ms)
Jun 7 12:56:08.172: INFO: (9) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.706424ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.855805ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 6.245777ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 6.443636ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 6.510533ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 6.491924ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 6.586426ms)
Jun 7 12:56:08.174: INFO: (9) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 6.751688ms)
Jun 7 12:56:08.175: INFO: (9) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 7.345203ms)
Jun 7 12:56:08.175: INFO: (9) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 4.022781ms)
Jun 7 12:56:08.182: INFO: (10) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 4.019609ms)
Jun 7 12:56:08.182: INFO: (10) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.132132ms)
Jun 7 12:56:08.182: INFO: (10) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.677617ms)
Jun 7 12:56:08.183: INFO: (10) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.708304ms)
Jun 7 12:56:08.183: INFO: (10) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.793726ms)
Jun 7 12:56:08.184: INFO: (10) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 5.598005ms)
Jun 7 12:56:08.184: INFO: (10) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.669908ms)
Jun 7 12:56:08.185: INFO: (10) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 6.563844ms)
Jun 7 12:56:08.187: INFO: (11) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 2.070846ms)
Jun 7 12:56:08.189: INFO: (11) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.257881ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.488187ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 4.816534ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.813072ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 5.061898ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.097203ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.136748ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 5.271406ms)
Jun 7 12:56:08.190: INFO: (11) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 5.250424ms)
Jun 7 12:56:08.191: INFO: (11) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.569098ms)
Jun 7 12:56:08.191: INFO: (11) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 5.91823ms)
Jun 7 12:56:08.191: INFO: (11) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 6.277133ms)
Jun 7 12:56:08.196: INFO: (12) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.10851ms)
Jun 7 12:56:08.196: INFO: (12) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.228365ms)
Jun 7 12:56:08.196: INFO: (12) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.551512ms)
Jun 7 12:56:08.197: INFO: (12) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.704762ms)
Jun 7 12:56:08.197: INFO: (12) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 5.835684ms)
Jun 7 12:56:08.197: INFO: (12) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 5.898817ms)
Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 6.346306ms)
Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 6.374366ms)
Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 6.367306ms)
Jun 7 12:56:08.198: INFO: (12) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 6.424711ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.707469ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.04856ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.048703ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.069012ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.122172ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.0442ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.073176ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 4.150494ms)
Jun 7 12:56:08.202: INFO: (13) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.342351ms)
Jun 7 12:56:08.203: INFO: (13) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 4.243725ms)
Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.245282ms)
Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.211484ms)
Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.311728ms)
Jun 7 12:56:08.207: INFO: (14) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 4.254735ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.502458ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 4.662329ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.69496ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.707799ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.783938ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.853318ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.808531ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.835401ms)
Jun 7 12:56:08.208: INFO: (14) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.824616ms)
Jun 7 12:56:08.210: INFO: (15) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 2.066415ms)
Jun 7 12:56:08.211: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.269219ms)
Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.130355ms)
Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.240534ms)
Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 4.366716ms)
Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 4.336004ms)
Jun 7 12:56:08.212: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 4.417249ms)
Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 5.719832ms)
Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 5.763953ms)
Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 6.043116ms)
Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.990662ms)
Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 6.096384ms)
Jun 7 12:56:08.214: INFO: (15) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 1.753793ms)
Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.183039ms)
Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.129581ms)
Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.33943ms)
Jun 7 12:56:08.218: INFO: (16) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 3.608428ms)
Jun 7 12:56:08.219: INFO: (16) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.765358ms)
Jun 7 12:56:08.219: INFO: (16) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test<... (200; 4.442759ms)
Jun 7 12:56:08.219: INFO: (16) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.5452ms)
Jun 7 12:56:08.222: INFO: (17) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 2.522368ms)
Jun 7 12:56:08.222: INFO: (17) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 2.680419ms)
Jun 7 12:56:08.222: INFO: (17) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 2.677004ms)
Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 3.195523ms)
Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.237775ms)
Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 3.581772ms)
Jun 7 12:56:08.223: INFO: (17) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.552145ms)
Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.66691ms)
Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.787423ms)
Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.895672ms)
Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.867006ms)
Jun 7 12:56:08.224: INFO: (17) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 5.203551ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.193249ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 3.199757ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: test (200; 3.482414ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 3.650634ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:1080/proxy/: ... (200; 3.61776ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 3.633919ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 3.671769ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.74882ms)
Jun 7 12:56:08.228: INFO: (18) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 3.946626ms)
Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.452646ms)
Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.553592ms)
Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.820356ms)
Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.814681ms)
Jun 7 12:56:08.229: INFO: (18) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.876142ms)
Jun 7 12:56:08.231: INFO: (19) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:460/proxy/: tls baz (200; 1.906574ms)
Jun 7 12:56:08.232: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:1080/proxy/: test<... (200; 2.626739ms)
Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:443/proxy/: ... (200; 2.928885ms)
Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng/proxy/: test (200; 3.387938ms)
Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.447009ms)
Jun 7 12:56:08.233: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:162/proxy/: bar (200; 3.480744ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/pods/http:proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.037225ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname2/proxy/: bar (200; 4.026984ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/proxy-service-wtd9f:portname1/proxy/: foo (200; 4.036166ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/pods/https:proxy-service-wtd9f-wf8ng:462/proxy/: tls qux (200; 4.156089ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname1/proxy/: tls baz (200; 4.180305ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname2/proxy/: bar (200; 4.133525ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/https:proxy-service-wtd9f:tlsportname2/proxy/: tls qux (200; 4.187643ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/services/http:proxy-service-wtd9f:portname1/proxy/: foo (200; 4.153328ms)
Jun 7 12:56:08.234: INFO: (19) /api/v1/namespaces/proxy-8083/pods/proxy-service-wtd9f-wf8ng:160/proxy/: foo (200; 4.130927ms)
STEP: deleting ReplicationController proxy-service-wtd9f in namespace proxy-8083, will wait for the garbage collector to delete the pods
Jun 7 12:56:08.292: INFO: Deleting ReplicationController proxy-service-wtd9f took: 6.397701ms
Jun 7 12:56:08.592: INFO: Terminating ReplicationController proxy-service-wtd9f pods took: 300.287644ms
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:56:22.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8083" for this suite.
Jun 7 12:56:28.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:56:28.330: INFO: namespace proxy-8083 deletion completed in 6.111498505s
• [SLOW TEST:33.404 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:56:28.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jun 7 12:56:28.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1647'
Jun 7 12:56:31.745: INFO: stderr: ""
Jun 7 12:56:31.745: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 7 12:56:32.769: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 12:56:32.769: INFO: Found 0 / 1
Jun 7 12:56:33.750: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 12:56:33.750: INFO: Found 0 / 1
Jun 7 12:56:34.749: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 12:56:34.749: INFO: Found 0 / 1
Jun 7 12:56:35.751: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 12:56:35.751: INFO: Found 1 / 1
Jun 7 12:56:35.751: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jun 7 12:56:35.754: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 12:56:35.754: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 7 12:56:35.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4c4zn --namespace=kubectl-1647 -p {"metadata":{"annotations":{"x":"y"}}}'
Jun 7 12:56:35.867: INFO: stderr: ""
Jun 7 12:56:35.867: INFO: stdout: "pod/redis-master-4c4zn patched\n"
STEP: checking annotations
Jun 7 12:56:35.870: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 12:56:35.870: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:56:35.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1647" for this suite.
Jun 7 12:56:57.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:56:57.976: INFO: namespace kubectl-1647 deletion completed in 22.104072433s
• [SLOW TEST:29.646 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:56:57.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1e378fbf-df4d-45f2-bf15-2cf7d906f465
STEP: Creating a pod to test consume configMaps
Jun 7 12:56:58.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2" in namespace "configmap-5412" to be "success or failure"
Jun 7 12:56:58.126: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.752025ms
Jun 7 12:57:00.131: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015249902s
Jun 7 12:57:02.135: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019247762s
STEP: Saw pod success
Jun 7 12:57:02.135: INFO: Pod "pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2" satisfied condition "success or failure"
Jun 7 12:57:02.137: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2 container configmap-volume-test:
STEP: delete the pod
Jun 7 12:57:02.171: INFO: Waiting for pod pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2 to disappear
Jun 7 12:57:02.180: INFO: Pod pod-configmaps-85cc2eca-fdfd-4a1e-a290-75dadbc3fae2 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:57:02.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5412" for this suite.
Jun 7 12:57:08.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:57:08.276: INFO: namespace configmap-5412 deletion completed in 6.093307311s
• [SLOW TEST:10.299 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:57:08.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 7 12:57:08.357: INFO: Waiting up to 5m0s for pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc" in namespace "emptydir-7687" to be "success or failure"
Jun 7 12:57:08.370: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.856038ms
Jun 7 12:57:10.375: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017439834s
Jun 7 12:57:12.379: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021828655s
STEP: Saw pod success
Jun 7 12:57:12.379: INFO: Pod "pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc" satisfied condition "success or failure"
Jun 7 12:57:12.382: INFO: Trying to get logs from node iruya-worker pod pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc container test-container:
STEP: delete the pod
Jun 7 12:57:12.402: INFO: Waiting for pod pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc to disappear
Jun 7 12:57:12.434: INFO: Pod pod-c7720ff1-4f9a-4258-b4fd-fe53ce55c3cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:57:12.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7687" for this suite.
Jun 7 12:57:18.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:57:18.533: INFO: namespace emptydir-7687 deletion completed in 6.095039204s
• [SLOW TEST:10.256 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:57:18.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d6a926ee-7a00-48a7-9d15-4c7260a24278
STEP: Creating a pod to test consume configMaps
Jun 7 12:57:18.613: INFO: Waiting up to 5m0s for pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366" in namespace "configmap-9615" to be "success or failure"
Jun 7 12:57:18.617: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366": Phase="Pending", Reason="", readiness=false. Elapsed: 3.325001ms
Jun 7 12:57:20.621: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007019076s
Jun 7 12:57:22.624: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010683778s
STEP: Saw pod success
Jun 7 12:57:22.624: INFO: Pod "pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366" satisfied condition "success or failure"
Jun 7 12:57:22.627: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366 container configmap-volume-test:
STEP: delete the pod
Jun 7 12:57:22.742: INFO: Waiting for pod pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366 to disappear
Jun 7 12:57:22.755: INFO: Pod pod-configmaps-de2824e3-b8f3-4007-ae2f-4af970686366 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:57:22.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9615" for this suite.
Jun 7 12:57:28.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:57:28.885: INFO: namespace configmap-9615 deletion completed in 6.123453711s
• [SLOW TEST:10.352 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:57:28.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-986226df-648c-4226-8bb4-caaf1d72a570
STEP: Creating secret with name s-test-opt-upd-6ddaf328-98ee-4508-a392-931fd9f93c3b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-986226df-648c-4226-8bb4-caaf1d72a570
STEP: Updating secret s-test-opt-upd-6ddaf328-98ee-4508-a392-931fd9f93c3b
STEP: Creating secret with name s-test-opt-create-d9e98acf-f3d4-420a-b673-efdd4b782174
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:59:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-257" for this suite.
Jun 7 12:59:23.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:59:23.647: INFO: namespace projected-257 deletion completed in 22.108805012s
• [SLOW TEST:114.762 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:59:23.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f8367e36-35a3-4456-a55e-774f4fc2cf67
STEP: Creating a pod to test consume configMaps
Jun 7 12:59:23.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3" in namespace "configmap-6132" to be "success or failure"
Jun 7 12:59:23.715: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449369ms
Jun 7 12:59:25.728: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015938832s
Jun 7 12:59:27.734: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02127956s
STEP: Saw pod success
Jun 7 12:59:27.734: INFO: Pod "pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3" satisfied condition "success or failure"
Jun 7 12:59:27.737: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3 container configmap-volume-test:
STEP: delete the pod
Jun 7 12:59:27.917: INFO: Waiting for pod pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3 to disappear
Jun 7 12:59:27.955: INFO: Pod pod-configmaps-0cf17052-4292-4d30-bc45-89fe5a3be8c3 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:59:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6132" for this suite.
Jun 7 12:59:33.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:59:34.068: INFO: namespace configmap-6132 deletion completed in 6.109275322s
• [SLOW TEST:10.421 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:59:34.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 7 12:59:34.101: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 12:59:39.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-211" for this suite.
Jun 7 12:59:45.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 12:59:45.720: INFO: namespace init-container-211 deletion completed in 6.092655958s
• [SLOW TEST:11.652 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 12:59:45.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-7k2q
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 12:59:45.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7k2q" in namespace "subpath-8022" to be "success or failure"
Jun 7 12:59:45.811: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.1378ms
Jun 7 12:59:47.815: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00685614s
Jun 7 12:59:49.819: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 4.010943969s
Jun 7 12:59:51.824: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 6.015524345s
Jun 7 12:59:53.828: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 8.01991417s
Jun 7 12:59:55.832: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 10.024240622s
Jun 7 12:59:57.837: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 12.028560762s
Jun 7 12:59:59.841: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 14.03327958s
Jun 7 13:00:01.846: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 16.037505715s
Jun 7 13:00:03.849: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 18.041209942s
Jun 7 13:00:05.854: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 20.045464835s
Jun 7 13:00:07.858: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Running", Reason="", readiness=true. Elapsed: 22.049926142s
Jun 7 13:00:09.862: INFO: Pod "pod-subpath-test-secret-7k2q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053591642s
STEP: Saw pod success
Jun 7 13:00:09.862: INFO: Pod "pod-subpath-test-secret-7k2q" satisfied condition "success or failure"
Jun 7 13:00:09.864: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-7k2q container test-container-subpath-secret-7k2q:
STEP: delete the pod
Jun 7 13:00:10.048: INFO: Waiting for pod pod-subpath-test-secret-7k2q to disappear
Jun 7 13:00:10.197: INFO: Pod pod-subpath-test-secret-7k2q no longer exists
STEP: Deleting pod pod-subpath-test-secret-7k2q
Jun 7 13:00:10.197: INFO: Deleting pod "pod-subpath-test-secret-7k2q" in namespace "subpath-8022"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:00:10.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8022" for this suite.
Jun 7 13:00:16.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:00:16.361: INFO: namespace subpath-8022 deletion completed in 6.133384752s
• [SLOW TEST:30.641 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:00:16.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8948
I0607 13:00:16.409621 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8948, replica count: 1
I0607 13:00:17.459999 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 13:00:18.460232 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 13:00:19.460423 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 13:00:20.460656 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 13:00:21.460955 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 7 13:00:21.612: INFO: Created: latency-svc-lvr2f
Jun 7 13:00:21.642: INFO: Got endpoints: latency-svc-lvr2f [81.599607ms]
Jun 7 13:00:21.731: INFO: Created: latency-svc-kgkwf
Jun 7 13:00:21.736: INFO: Got endpoints: latency-svc-kgkwf [93.427962ms]
Jun 7 13:00:21.804: INFO: Created: latency-svc-fgzbn
Jun 7 13:00:21.821: INFO: Got endpoints: latency-svc-fgzbn [178.119879ms]
Jun 7 13:00:21.905: INFO: Created: latency-svc-7kq49
Jun 7 13:00:21.938: INFO: Got endpoints: latency-svc-7kq49 [295.994591ms]
Jun 7 13:00:21.969: INFO: Created: latency-svc-2hv58
Jun 7 13:00:21.982: INFO: Got endpoints: latency-svc-2hv58 [339.741627ms]
Jun 7 13:00:22.048: INFO: Created: latency-svc-6h8tp
Jun 7 13:00:22.055: INFO: Got endpoints: latency-svc-6h8tp [412.467775ms]
Jun 7 13:00:22.074: INFO: Created: latency-svc-t8nbm
Jun 7 13:00:22.085: INFO: Got endpoints: latency-svc-t8nbm [442.283414ms]
Jun 7 13:00:22.104: INFO: Created: latency-svc-7cm8j
Jun 7 13:00:22.130: INFO: Got endpoints: latency-svc-7cm8j [487.771226ms]
Jun 7 13:00:22.210: INFO: Created: latency-svc-vk2ww
Jun 7 13:00:22.213: INFO: Got endpoints: latency-svc-vk2ww [570.479582ms]
Jun 7 13:00:22.272: INFO: Created: latency-svc-9zf58
Jun 7 13:00:22.284: INFO: Got endpoints: latency-svc-9zf58 [640.946806ms]
Jun 7 13:00:22.302: INFO: Created: latency-svc-2q5g8
Jun 7 13:00:22.349: INFO: Got endpoints: latency-svc-2q5g8 [706.461337ms]
Jun 7 13:00:22.368: INFO: Created: latency-svc-bdp9n
Jun 7 13:00:22.381: INFO: Got endpoints: latency-svc-bdp9n [738.119249ms]
Jun 7 13:00:22.401: INFO: Created: latency-svc-svgwg
Jun 7 13:00:22.417: INFO: Got endpoints: latency-svc-svgwg [774.210334ms]
Jun 7 13:00:22.485: INFO: Created: latency-svc-psmtq
Jun 7 13:00:22.495: INFO: Got endpoints: latency-svc-psmtq [852.011299ms]
Jun 7 13:00:22.515: INFO: Created: latency-svc-8k4l9
Jun 7 13:00:22.526: INFO: Got endpoints: latency-svc-8k4l9 [882.817794ms]
Jun 7 13:00:22.543: INFO: Created: latency-svc-pwdcf
Jun 7 13:00:22.556: INFO: Got endpoints: latency-svc-pwdcf [913.087523ms]
Jun 7 13:00:22.572: INFO: Created: latency-svc-crzlg
Jun 7 13:00:22.628: INFO: Got endpoints: latency-svc-crzlg [892.234623ms]
Jun 7 13:00:22.630: INFO: Created: latency-svc-zz45t
Jun 7 13:00:22.647: INFO: Got endpoints: latency-svc-zz45t [825.747455ms]
Jun 7 13:00:22.700: INFO: Created: latency-svc-57fn2
Jun 7 13:00:22.719: INFO: Got endpoints: latency-svc-57fn2 [780.040899ms]
Jun 7 13:00:22.808: INFO: Created: latency-svc-bgqvs
Jun 7 13:00:22.815: INFO: Got endpoints: latency-svc-bgqvs [832.536949ms]
Jun 7 13:00:22.866: INFO: Created: latency-svc-rth8k
Jun 7 13:00:22.898: INFO: Got endpoints: latency-svc-rth8k [842.781711ms]
Jun 7 13:00:22.947: INFO: Created: latency-svc-hn4qs
Jun 7 13:00:22.964: INFO: Got endpoints: latency-svc-hn4qs [878.779208ms]
Jun 7 13:00:22.992: INFO: Created: latency-svc-thb87
Jun 7 13:00:23.009: INFO: Got endpoints: latency-svc-thb87 [878.359359ms]
Jun 7 13:00:23.028: INFO: Created: latency-svc-ds29d
Jun 7 13:00:23.044: INFO: Got endpoints: latency-svc-ds29d [830.689763ms]
Jun 7 13:00:23.091: INFO: Created: latency-svc-9skhd
Jun 7 13:00:23.098: INFO: Got endpoints: latency-svc-9skhd [814.819994ms]
Jun 7 13:00:23.126: INFO: Created: latency-svc-87n4c
Jun 7 13:00:23.140: INFO: Got endpoints: latency-svc-87n4c [791.190619ms]
Jun 7 13:00:23.180: INFO: Created: latency-svc-vb94z
Jun 7 13:00:23.221: INFO: Got endpoints: latency-svc-vb94z [840.217768ms]
Jun 7 13:00:23.280: INFO: Created: latency-svc-7xdwt
Jun 7 13:00:23.291: INFO: Got endpoints: latency-svc-7xdwt [873.898234ms]
Jun 7 13:00:23.359: INFO: Created: latency-svc-22nzd
Jun 7 13:00:23.385: INFO: Created: latency-svc-8qfbh
Jun 7 13:00:23.385: INFO: Got endpoints: latency-svc-22nzd [889.752504ms]
Jun 7 13:00:23.408: INFO: Got endpoints: latency-svc-8qfbh [882.466242ms]
Jun 7 13:00:23.438: INFO: Created: latency-svc-bzbcn
Jun 7 13:00:23.497: INFO: Got endpoints: latency-svc-bzbcn [941.031289ms]
Jun 7 13:00:23.507: INFO: Created: latency-svc-77p82
Jun 7 13:00:23.520: INFO: Got endpoints: latency-svc-77p82 [892.17056ms]
Jun 7 13:00:23.538: INFO: Created: latency-svc-lld2d
Jun 7 13:00:23.551: INFO: Got endpoints: latency-svc-lld2d [904.083966ms]
Jun 7 13:00:23.570: INFO: Created: latency-svc-jxxwd
Jun 7 13:00:23.581: INFO: Got endpoints: latency-svc-jxxwd [862.552973ms]
Jun 7 13:00:23.642: INFO: Created: latency-svc-wggjg
Jun 7 13:00:23.644: INFO: Got endpoints: latency-svc-wggjg [828.835733ms]
Jun 7 13:00:23.672: INFO: Created: latency-svc-95jmz
Jun 7 13:00:23.684: INFO: Got endpoints: latency-svc-95jmz [786.483925ms]
Jun 7 13:00:23.707: INFO: Created: latency-svc-g2n2d
Jun 7 13:00:23.720: INFO: Got endpoints: latency-svc-g2n2d [756.542693ms]
Jun 7 13:00:23.779: INFO: Created: latency-svc-c8s67
Jun 7 13:00:23.782: INFO: Got endpoints: latency-svc-c8s67 [772.990158ms]
Jun 7 13:00:23.808: INFO: Created: latency-svc-6lj7q
Jun 7 13:00:23.829: INFO: Got endpoints: latency-svc-6lj7q [784.905623ms]
Jun 7 13:00:23.870: INFO: Created: latency-svc-tfhrd
Jun 7 13:00:23.916: INFO: Got endpoints: latency-svc-tfhrd [817.287212ms]
Jun 7 13:00:23.917: INFO: Created: latency-svc-5bc47
Jun 7 13:00:23.931: INFO: Got endpoints: latency-svc-5bc47 [790.720087ms]
Jun 7 13:00:23.950: INFO: Created: latency-svc-fqrwn
Jun 7 13:00:23.980: INFO: Got endpoints: latency-svc-fqrwn [758.597645ms]
Jun 7 13:00:24.066: INFO: Created: latency-svc-vh8xb
Jun 7 13:00:24.071: INFO: Got endpoints: latency-svc-vh8xb [780.402281ms]
Jun 7 13:00:24.108: INFO: Created: latency-svc-cw7nf
Jun 7 13:00:24.121: INFO: Got endpoints: latency-svc-cw7nf [736.031954ms]
Jun 7 13:00:24.135: INFO: Created: latency-svc-88lm2
Jun 7 13:00:24.148: INFO: Got endpoints: latency-svc-88lm2 [740.336679ms]
Jun 7 13:00:24.166: INFO: Created: latency-svc-wbr7j
Jun 7 13:00:24.228: INFO: Got endpoints: latency-svc-wbr7j [730.572061ms]
Jun 7 13:00:24.234: INFO: Created: latency-svc-gtxqx
Jun 7 13:00:24.251: INFO: Got endpoints: latency-svc-gtxqx [730.790866ms]
Jun 7 13:00:24.288: INFO: Created: latency-svc-xnbcs
Jun 7 13:00:24.300: INFO: Got endpoints: latency-svc-xnbcs [748.833835ms]
Jun 7 13:00:24.316: INFO: Created: latency-svc-pfkdc
Jun 7 13:00:24.359: INFO: Got endpoints: latency-svc-pfkdc [777.498947ms]
Jun 7 13:00:24.369: INFO: Created: latency-svc-l7xzm
Jun 7 13:00:24.385: INFO: Got endpoints: latency-svc-l7xzm [740.762379ms]
Jun 7 13:00:24.412: INFO: Created: latency-svc-6llkg
Jun 7 13:00:24.427: INFO: Got endpoints: latency-svc-6llkg [741.917776ms]
Jun 7 13:00:24.455: INFO: Created: latency-svc-8nm4b
Jun 7 13:00:24.497: INFO: Got endpoints: latency-svc-8nm4b [776.373329ms]
Jun 7 13:00:24.504: INFO: Created: latency-svc-l7bx5
Jun 7 13:00:24.534: INFO: Got endpoints: latency-svc-l7bx5 [752.085508ms]
Jun 7 13:00:24.555: INFO: Created: latency-svc-245fz
Jun 7 13:00:24.585: INFO: Got endpoints: latency-svc-245fz [756.529129ms]
Jun 7 13:00:24.647: INFO: Created: latency-svc-xdgzs
Jun 7 13:00:24.662: INFO: Got endpoints: latency-svc-xdgzs [745.891561ms]
Jun 7 13:00:24.734: INFO: Created: latency-svc-t5dsz
Jun 7 13:00:24.844: INFO: Got endpoints: latency-svc-t5dsz [912.632379ms]
Jun 7 13:00:24.849: INFO: Created: latency-svc-dbnsn
Jun 7 13:00:24.872: INFO: Got endpoints: latency-svc-dbnsn [892.306691ms]
Jun 7 13:00:24.912: INFO: Created: latency-svc-7zgzf
Jun 7 13:00:24.926: INFO: Got endpoints: latency-svc-7zgzf [854.730209ms]
Jun 7 13:00:24.994: INFO: Created: latency-svc-qgfq9
Jun 7 13:00:24.998: INFO: Got endpoints: latency-svc-qgfq9 [877.573751ms]
Jun 7 13:00:25.023: INFO: Created: latency-svc-966g4
Jun 7 13:00:25.035: INFO: Got endpoints: latency-svc-966g4 [886.243593ms]
Jun 7 13:00:25.066: INFO: Created: latency-svc-q9r9r
Jun 7 13:00:25.077: INFO: Got endpoints: latency-svc-q9r9r [849.837162ms]
Jun 7 13:00:25.132: INFO: Created: latency-svc-wc88h
Jun 7 13:00:25.138: INFO: Got endpoints: latency-svc-wc88h [886.331012ms]
Jun 7 13:00:25.158: INFO: Created: latency-svc-prc4k
Jun 7 13:00:25.174: INFO: Got endpoints: latency-svc-prc4k [874.566876ms]
Jun 7 13:00:25.206: INFO: Created: latency-svc-wkt9t
Jun 7 13:00:25.222: INFO: Got endpoints: latency-svc-wkt9t [863.614458ms]
Jun 7 13:00:25.283: INFO: Created: latency-svc-6rlmg
Jun 7 13:00:25.299: INFO: Got endpoints: latency-svc-6rlmg [914.513381ms]
Jun 7 13:00:25.331: INFO: Created: latency-svc-lkkzg
Jun 7 13:00:25.343: INFO: Got endpoints: latency-svc-lkkzg [916.114404ms]
Jun 7 13:00:25.416: INFO: Created: latency-svc-9ml2r
Jun 7 13:00:25.433: INFO: Got endpoints: latency-svc-9ml2r [936.216656ms]
Jun 7 13:00:25.456: INFO: Created: latency-svc-twks6
Jun 7 13:00:25.472: INFO: Got endpoints: latency-svc-twks6 [938.426695ms]
Jun 7 13:00:25.498: INFO: Created: latency-svc-m6zn8
Jun 7 13:00:25.568: INFO: Got endpoints: latency-svc-m6zn8 [982.901041ms]
Jun 7 13:00:25.571: INFO: Created: latency-svc-tbpsm
Jun 7 13:00:25.579: INFO: Got endpoints: latency-svc-tbpsm [917.32374ms]
Jun 7 13:00:25.595: INFO: Created: latency-svc-8b7mt
Jun 7 13:00:25.609: INFO: Got endpoints: latency-svc-8b7mt [764.786115ms]
Jun 7 13:00:25.627: INFO: Created: latency-svc-nbnnf
Jun 7 13:00:25.639: INFO: Got endpoints: latency-svc-nbnnf [766.640602ms]
Jun 7 13:00:25.656: INFO: Created: latency-svc-p49qs
Jun 7 13:00:25.718: INFO: Got endpoints: latency-svc-p49qs [792.223817ms]
Jun 7 13:00:25.737: INFO: Created: latency-svc-vxnmx
Jun 7 13:00:25.754: INFO: Got endpoints: latency-svc-vxnmx [755.732654ms]
Jun 7 13:00:25.774: INFO: Created: latency-svc-qvg6d
Jun 7 13:00:25.790: INFO: Got endpoints: latency-svc-qvg6d [755.115552ms]
Jun 7 13:00:25.812: INFO: Created: latency-svc-rxmnn
Jun 7 13:00:25.856: INFO: Got endpoints: latency-svc-rxmnn [778.373761ms]
Jun 7 13:00:25.884: INFO: Created: latency-svc-gv8mk
Jun 7 13:00:25.899: INFO: Got endpoints: latency-svc-gv8mk [760.843136ms]
Jun 7 13:00:25.917: INFO: Created: latency-svc-mxdvq
Jun 7 13:00:25.929: INFO: Got endpoints: latency-svc-mxdvq [754.500666ms]
Jun 7 13:00:26.007: INFO: Created: latency-svc-djz75
Jun 7 13:00:26.010: INFO: Got endpoints: latency-svc-djz75 [787.463545ms]
Jun 7 13:00:26.038: INFO: Created: latency-svc-lhstv
Jun 7 13:00:26.065: INFO: Got endpoints: latency-svc-lhstv [765.387038ms]
Jun 7 13:00:26.107: INFO: Created: latency-svc-bll96
Jun 7 13:00:26.151: INFO: Got endpoints: latency-svc-bll96 [807.80164ms]
Jun 7 13:00:26.163: INFO: Created: latency-svc-rmf9b
Jun 7 13:00:26.176: INFO: Got endpoints: latency-svc-rmf9b [742.885899ms]
Jun 7 13:00:26.199: INFO: Created: latency-svc-qx9sm
Jun 7 13:00:26.212: INFO: Got endpoints: latency-svc-qx9sm [739.827777ms]
Jun 7 13:00:26.229: INFO: Created: latency-svc-rzkjg
Jun 7 13:00:26.243: INFO: Got endpoints: latency-svc-rzkjg [674.289434ms]
Jun 7 13:00:26.288: INFO: Created: latency-svc-hfmhh
Jun 7 13:00:26.309: INFO: Got endpoints: latency-svc-hfmhh [730.123782ms]
Jun 7 13:00:26.328: INFO: Created: latency-svc-zvlbm
Jun 7 13:00:26.339: INFO: Got endpoints: latency-svc-zvlbm [730.420718ms]
Jun 7 13:00:26.355: INFO: Created: latency-svc-cczrt
Jun 7 13:00:26.370: INFO: Got endpoints: latency-svc-cczrt [730.738777ms]
Jun 7 13:00:26.431: INFO: Created: latency-svc-8tjh9
Jun 7 13:00:26.442: INFO: Got endpoints: latency-svc-8tjh9 [723.234985ms]
Jun 7 13:00:26.463: INFO: Created: latency-svc-jvbtn
Jun 7 13:00:26.478: INFO: Got endpoints: latency-svc-jvbtn [724.107453ms]
Jun 7 13:00:26.502: INFO: Created: latency-svc-9x2xb
Jun 7 13:00:26.515: INFO: Got endpoints: latency-svc-9x2xb [724.672219ms]
Jun 7 13:00:26.588: INFO: Created: latency-svc-lgnhq
Jun 7 13:00:26.607: INFO: Got endpoints: latency-svc-lgnhq [750.576287ms]
Jun 7 13:00:26.607: INFO: Created: latency-svc-ckfgl
Jun 7 13:00:26.623: INFO: Got endpoints: latency-svc-ckfgl [724.778906ms]
Jun 7 13:00:26.643: INFO: Created: latency-svc-m9n7z
Jun 7 13:00:26.660: INFO: Got endpoints: latency-svc-m9n7z [730.791456ms]
Jun 7 13:00:26.736: INFO: Created: latency-svc-6cgrh
Jun 7 13:00:26.744: INFO: Got endpoints: latency-svc-6cgrh [733.877628ms]
Jun 7 13:00:26.781: INFO: Created: latency-svc-tlph5
Jun 7 13:00:26.798: INFO: Got endpoints: latency-svc-tlph5 [733.619923ms]
Jun 7 13:00:26.881: INFO: Created: latency-svc-drr69
Jun 7 13:00:26.885: INFO: Got endpoints: latency-svc-drr69 [733.922561ms]
Jun 7 13:00:26.909: INFO: Created: latency-svc-d8wzb
Jun 7 13:00:26.925: INFO: Got endpoints: latency-svc-d8wzb [748.410304ms]
Jun 7 13:00:26.975: INFO: Created: latency-svc-jpzsk
Jun 7 13:00:27.018: INFO: Got endpoints: latency-svc-jpzsk [805.413277ms]
Jun 7 13:00:27.039: INFO: Created: latency-svc-2lwsx
Jun 7 13:00:27.051: INFO: Got endpoints: latency-svc-2lwsx [808.368579ms]
Jun 7 13:00:27.069: INFO: Created: latency-svc-fkm9p
Jun 7 13:00:27.082: INFO: Got endpoints: latency-svc-fkm9p [772.220512ms]
Jun 7 13:00:27.100: INFO: Created: latency-svc-h59jh
Jun 7 13:00:27.112: INFO: Got endpoints: latency-svc-h59jh [772.51669ms]
Jun 7 13:00:27.180: INFO: Created: latency-svc-k2gvd
Jun 7 13:00:27.197: INFO: Got endpoints: latency-svc-k2gvd [827.571347ms]
Jun 7 13:00:27.227: INFO: Created: latency-svc-8hdmd
Jun 7 13:00:27.239: INFO: Got endpoints: latency-svc-8hdmd [797.080268ms]
Jun 7 13:00:27.347: INFO: Created: latency-svc-xp4vj
Jun 7 13:00:27.350: INFO: Got endpoints: latency-svc-xp4vj [871.477678ms]
Jun 7 13:00:27.426: INFO: Created: latency-svc-jzdkn
Jun 7 13:00:27.437: INFO: Got endpoints: latency-svc-jzdkn [922.014605ms]
Jun 7 13:00:27.485: INFO: Created: latency-svc-6k6m4
Jun 7 13:00:27.491: INFO: Got endpoints: latency-svc-6k6m4 [884.369809ms]
Jun 7 13:00:27.513: INFO: Created: latency-svc-xhcnd
Jun 7 13:00:27.528: INFO: Got endpoints: latency-svc-xhcnd [904.142453ms]
Jun 7 13:00:27.550: INFO: Created: latency-svc-nkmjt
Jun 7 13:00:27.564: INFO: Got endpoints: latency-svc-nkmjt [904.488711ms]
Jun 7 13:00:27.635: INFO: Created: latency-svc-d4dvc
Jun 7 13:00:27.648: INFO: Got endpoints: latency-svc-d4dvc [904.09601ms]
Jun 7 13:00:27.671: INFO: Created: latency-svc-xrngm
Jun 7 13:00:27.684: INFO: Got endpoints: latency-svc-xrngm [886.179963ms]
Jun 7 13:00:27.701: INFO: Created: latency-svc-b9rtg
Jun 7 13:00:27.715: INFO: Got endpoints: latency-svc-b9rtg [830.187819ms]
Jun 7 13:00:27.732: INFO: Created: latency-svc-vt75j
Jun 7 13:00:27.772: INFO: Got endpoints: latency-svc-vt75j [847.600251ms]
Jun 7 13:00:27.783: INFO: Created: latency-svc-gt4xs
Jun 7 13:00:27.813: INFO: Got endpoints: latency-svc-gt4xs [794.66138ms]
Jun 7 13:00:27.837: INFO: Created: latency-svc-2r9tv
Jun 7 13:00:27.946: INFO: Got endpoints: latency-svc-2r9tv [894.436607ms]
Jun 7 13:00:27.990: INFO: Created: latency-svc-qdh6p
Jun 7 13:00:28.048: INFO: Got endpoints: latency-svc-qdh6p [966.050422ms]
Jun 7 13:00:28.055: INFO: Created: latency-svc-4t48k
Jun 7 13:00:28.058: INFO: Got endpoints: latency-svc-4t48k [946.444416ms]
Jun 7 13:00:28.113: INFO: Created: latency-svc-6sv2m
Jun 7 13:00:28.119: INFO: Got endpoints: latency-svc-6sv2m [921.363202ms]
Jun 7 13:00:28.143: INFO: Created: latency-svc-c7w8r
Jun 7 13:00:28.191: INFO: Got endpoints: latency-svc-c7w8r [952.309291ms]
Jun 7 13:00:28.217: INFO: Created: latency-svc-qd99z
Jun 7 13:00:28.234: INFO: Got endpoints: latency-svc-qd99z [883.73271ms]
Jun 7 13:00:28.266: INFO: Created: latency-svc-rmtdl
Jun 7 13:00:28.276: INFO: Got endpoints: latency-svc-rmtdl [838.843319ms]
Jun 7 13:00:28.330: INFO: Created: latency-svc-4db4l
Jun 7 13:00:28.333: INFO: Got endpoints: latency-svc-4db4l [842.162285ms]
Jun 7 13:00:28.359: INFO: Created: latency-svc-8pk88
Jun 7 13:00:28.372: INFO: Got endpoints: latency-svc-8pk88 [844.717141ms]
Jun 7 13:00:28.389: INFO: Created: latency-svc-m2mwm
Jun 7 13:00:28.415: INFO: Got endpoints: latency-svc-m2mwm [850.462909ms]
Jun 7 13:00:28.473: INFO: Created: latency-svc-d2qsd
Jun 7 13:00:28.475: INFO: Got endpoints: latency-svc-d2qsd [827.320647ms]
Jun 7 13:00:28.523: INFO: Created: latency-svc-gmd94
Jun 7 13:00:28.544: INFO: Got endpoints: latency-svc-gmd94 [859.800129ms]
Jun 7 13:00:28.569: INFO: Created: latency-svc-74pft
Jun 7 13:00:28.610: INFO: Got endpoints: latency-svc-74pft [895.510437ms]
Jun 7 13:00:28.619: INFO: Created: latency-svc-mndxt
Jun 7 13:00:28.632: INFO: Got endpoints: latency-svc-mndxt [859.336429ms]
Jun 7 13:00:28.649: INFO: Created: latency-svc-5mprl
Jun 7 13:00:28.668: INFO: Got endpoints: latency-svc-5mprl [855.814671ms]
Jun 7 13:00:28.697: INFO: Created: latency-svc-rzbdp
Jun 7 13:00:28.736: INFO: Got endpoints: latency-svc-rzbdp [790.77278ms]
Jun 7 13:00:28.755: INFO: Created: latency-svc-pls5s
Jun 7 13:00:28.771: INFO: Got endpoints: latency-svc-pls5s [723.186807ms]
Jun 7 13:00:28.810: INFO: Created: latency-svc-pbdfc
Jun 7 13:00:28.825: INFO: Got endpoints: latency-svc-pbdfc [767.141925ms]
Jun 7 13:00:28.878: INFO: Created: latency-svc-6qnnw
Jun 7 13:00:28.898: INFO: Got endpoints: latency-svc-6qnnw [779.198669ms]
Jun 7 13:00:28.966: INFO: Created: latency-svc-78qlk
Jun 7 13:00:29.006: INFO: Got endpoints: latency-svc-78qlk [814.567007ms]
Jun 7 13:00:29.020: INFO: Created: latency-svc-t7vs2
Jun 7 13:00:29.030: INFO: Got endpoints: latency-svc-t7vs2 [796.092096ms]
Jun 7 13:00:29.051: INFO: Created: latency-svc-cpl7b
Jun 7 13:00:29.067: INFO: Got endpoints: latency-svc-cpl7b [791.018686ms]
Jun 7 13:00:29.087: INFO: Created: latency-svc-p4tq8
Jun 7 13:00:29.143: INFO: Got endpoints: latency-svc-p4tq8 [810.234125ms]
Jun 7 13:00:29.169: INFO: Created: latency-svc-29cxt
Jun 7 13:00:29.181: INFO: Got endpoints: latency-svc-29cxt [808.093349ms]
Jun 7 13:00:29.206: INFO: Created: latency-svc-lrfq4
Jun 7 13:00:29.217: INFO: Got endpoints: latency-svc-lrfq4 [802.691558ms]
Jun 7 13:00:29.235: INFO: Created: latency-svc-gvhpp
Jun 7 13:00:29.311: INFO: Got endpoints: latency-svc-gvhpp [836.066798ms]
Jun 7 13:00:29.315: INFO: Created: latency-svc-gw25x
Jun 7 13:00:29.319: INFO: Got endpoints: latency-svc-gw25x [775.069798ms]
Jun 7 13:00:29.345: INFO: Created: latency-svc-qc6gt
Jun 7 13:00:29.356: INFO: Got endpoints: latency-svc-qc6gt [745.740733ms]
Jun 7 13:00:29.391: INFO: Created: latency-svc-z7km5
Jun 7 13:00:29.411: INFO: Got endpoints: latency-svc-z7km5 [778.947298ms]
Jun 7 13:00:29.455: INFO: Created: latency-svc-fqbj5
Jun 7 13:00:29.458: INFO: Got endpoints: latency-svc-fqbj5 [789.489447ms]
Jun 7 13:00:29.495: INFO: Created: latency-svc-z9jcl
Jun 7 13:00:29.507: INFO: Got endpoints: latency-svc-z9jcl [770.657082ms]
Jun 7 13:00:29.525: INFO: Created: latency-svc-jdn2g
Jun 7 13:00:29.549: INFO: Got endpoints: latency-svc-jdn2g [778.117141ms]
Jun 7 13:00:29.599: INFO: Created: latency-svc-sxtdz
Jun 7 13:00:29.602: INFO: Got endpoints: latency-svc-sxtdz [776.229321ms]
Jun 7 13:00:29.643: INFO: Created: latency-svc-hlvnl
Jun 7 13:00:29.652: INFO: Got endpoints: latency-svc-hlvnl [754.05273ms]
Jun 7 13:00:29.669: INFO: Created: latency-svc-mt4hh
Jun 7 13:00:29.682: INFO: Got endpoints: latency-svc-mt4hh [676.359531ms]
Jun 7 13:00:29.737: INFO: Created: latency-svc-r5xml
Jun 7 13:00:29.739: INFO: Got endpoints: latency-svc-r5xml [708.810236ms]
Jun 7 13:00:29.771: INFO: Created: latency-svc-kvnvh
Jun 7 13:00:29.785: INFO: Got endpoints: latency-svc-kvnvh [718.584973ms]
Jun 7 13:00:29.805: INFO: Created: latency-svc-5lt6t
Jun 7 13:00:29.822: INFO: Got endpoints: latency-svc-5lt6t [678.090272ms]
Jun 7 13:00:29.874: INFO: Created: latency-svc-rzzgv
Jun 7 13:00:29.878: INFO: Got endpoints: latency-svc-rzzgv [696.772457ms]
Jun 7 13:00:29.907: INFO: Created: latency-svc-x65t2
Jun 7 13:00:29.918: INFO: Got endpoints: latency-svc-x65t2 [700.041544ms]
Jun 7 13:00:29.940: INFO: Created: latency-svc-77mq9
Jun 7 13:00:29.960: INFO: Got endpoints: latency-svc-77mq9 [648.796241ms]
Jun 7 13:00:30.078: INFO: Created: latency-svc-xbt9s
Jun 7 13:00:30.116: INFO: Got endpoints: latency-svc-xbt9s [796.961479ms]
Jun 7 13:00:30.117: INFO: Created: latency-svc-j7n6v
Jun 7 13:00:30.143: INFO: Got endpoints: latency-svc-j7n6v [786.367756ms]
Jun 7 13:00:30.246: INFO: Created: latency-svc-9fnh7
Jun 7 13:00:30.249: INFO: Got endpoints: latency-svc-9fnh7 [837.964ms]
Jun 7 13:00:30.279: INFO: Created: latency-svc-l6jh7
Jun 7 13:00:30.291: INFO: Got endpoints: latency-svc-l6jh7 [832.954496ms]
Jun 7 13:00:30.310: INFO: Created: latency-svc-547wg
Jun 7 13:00:30.322: INFO: Got endpoints: latency-svc-547wg [814.392818ms]
Jun 7 13:00:30.341: INFO: Created: latency-svc-qt2sw
Jun 7 13:00:30.389: INFO: Got endpoints: latency-svc-qt2sw [839.773589ms]
Jun 7 13:00:30.395: INFO: Created: latency-svc-sgnvl
Jun 7 13:00:30.406: INFO: Got endpoints: latency-svc-sgnvl [804.28711ms]
Jun 7 13:00:30.431: INFO: Created: latency-svc-hxjkv
Jun 7 13:00:30.443: INFO: Got endpoints: latency-svc-hxjkv [790.75589ms]
Jun 7 13:00:30.458: INFO: Created: latency-svc-cplfz
Jun 7 13:00:30.473: INFO: Got endpoints: latency-svc-cplfz [790.770169ms]
Jun 7 13:00:30.539: INFO: Created: latency-svc-5wfv8
Jun 7 13:00:30.561: INFO: Got endpoints: latency-svc-5wfv8 [822.144383ms]
Jun 7 13:00:30.561: INFO: Created: latency-svc-ppz5f
Jun 7 13:00:30.569: INFO: Got endpoints: latency-svc-ppz5f [783.93543ms]
Jun 7 13:00:30.587: INFO: Created: latency-svc-pt9np
Jun 7 13:00:30.612: INFO: Got endpoints: latency-svc-pt9np [790.306645ms]
Jun 7 13:00:30.677: INFO: Created: latency-svc-9wlqx
Jun 7 13:00:30.698: INFO: Got endpoints: latency-svc-9wlqx [820.791258ms]
Jun 7 13:00:30.699: INFO: Created: latency-svc-dk4hx
Jun 7 13:00:30.714: INFO: Got endpoints: latency-svc-dk4hx [796.492515ms]
Jun 7 13:00:30.741: INFO: Created: latency-svc-8h7bb
Jun 7 13:00:30.751: INFO: Got endpoints: latency-svc-8h7bb [790.464739ms]
Jun 7 13:00:30.838: INFO: Created: latency-svc-9wfzx
Jun 7 13:00:30.841: INFO: Got endpoints: latency-svc-9wfzx [724.375068ms]
Jun 7 13:00:30.903: INFO: Created: latency-svc-q5nht
Jun 7 13:00:30.913: INFO: Got endpoints: latency-svc-q5nht [770.633751ms]
Jun 7 13:00:31.007: INFO: Created: latency-svc-9thnw
Jun 7 13:00:31.037: INFO: Got endpoints: latency-svc-9thnw [788.496155ms]
Jun 7 13:00:31.067: INFO: Created: latency-svc-6tkwn
Jun 7 13:00:31.082: INFO: Got endpoints: latency-svc-6tkwn [790.576822ms]
Jun 7 13:00:31.150: INFO: Created: latency-svc-5zdtv
Jun 7 13:00:31.152: INFO: Got endpoints: latency-svc-5zdtv [830.622586ms]
Jun 7 13:00:31.185: INFO: Created: latency-svc-mcnhn
Jun 7 13:00:31.214: INFO: Got endpoints: latency-svc-mcnhn [825.442205ms]
Jun 7 13:00:31.247: INFO: Created: latency-svc-4mcgd
Jun 7 13:00:31.305: INFO: Got endpoints: latency-svc-4mcgd [899.096781ms]
Jun 7 13:00:31.307: INFO: Created: latency-svc-sb984
Jun 7 13:00:31.317: INFO: Got endpoints: latency-svc-sb984 [874.022535ms]
Jun 7 13:00:31.347: INFO: Created: latency-svc-9f9kf
Jun 7 13:00:31.359: INFO: Got endpoints: latency-svc-9f9kf [885.757319ms]
Jun 7 13:00:31.394: INFO: Created: latency-svc-zjk57
Jun 7 13:00:31.472: INFO: Got endpoints: latency-svc-zjk57 [911.434706ms]
Jun 7 13:00:31.499: INFO: Created: latency-svc-px6t7
Jun 7 13:00:31.510: INFO: Got endpoints: latency-svc-px6t7 [940.631134ms]
Jun 7 13:00:31.529: INFO: Created: latency-svc-6bz28
Jun 7 13:00:31.540: INFO: Got endpoints: latency-svc-6bz28 [928.354689ms]
Jun 7 13:00:31.560: INFO: Created: latency-svc-jqs7j
Jun 7 13:00:31.570: INFO: Got endpoints: latency-svc-jqs7j [871.609945ms]
Jun 7 13:00:31.630: INFO: Created: latency-svc-t4vtv
Jun 7 13:00:31.632: INFO: Got endpoints: latency-svc-t4vtv [918.224246ms]
Jun 7 13:00:31.665: INFO: Created: latency-svc-f2rzx
Jun 7 13:00:31.680: INFO: Got endpoints: latency-svc-f2rzx [928.883845ms]
Jun 7 13:00:31.697: INFO: Created: latency-svc-x9llw
Jun 7 13:00:31.710: INFO: Got endpoints: latency-svc-x9llw [868.675152ms]
Jun 7 13:00:31.727: INFO: Created: latency-svc-9qhbc
Jun 7 13:00:31.784: INFO: Got endpoints: latency-svc-9qhbc [870.670182ms]
Jun 7 13:00:31.808: INFO: Created: latency-svc-p29dp
Jun 7 13:00:31.840: INFO: Got endpoints: latency-svc-p29dp [802.830784ms]
Jun 7 13:00:31.881: INFO: Created: latency-svc-qbt9t
Jun 7 13:00:31.922: INFO: Got endpoints: latency-svc-qbt9t [840.122158ms]
Jun 7 13:00:31.930: INFO: Created: latency-svc-2wqhx
Jun 7 13:00:31.944: INFO: Got endpoints: latency-svc-2wqhx [792.167245ms]
Jun 7 13:00:31.979: INFO: Created: latency-svc-jfm5t
Jun 7 13:00:31.999: INFO: Got endpoints: latency-svc-jfm5t [784.472296ms]
Jun 7 13:00:32.066: INFO: Created: latency-svc-4q4g8
Jun 7 13:00:32.077: INFO: Got endpoints: latency-svc-4q4g8 [771.766425ms]
Jun 7 13:00:32.097: INFO: Created: latency-svc-fcqlw
Jun 7 13:00:32.107: INFO: Got endpoints: latency-svc-fcqlw [790.387377ms]
Jun 7 13:00:32.129: INFO: Created: latency-svc-xst9v
Jun 7 13:00:32.144: INFO: Got endpoints: latency-svc-xst9v [784.912692ms]
Jun 7 13:00:32.191: INFO: Created: latency-svc-lkmbg
Jun 7 13:00:32.210: INFO: Got endpoints: latency-svc-lkmbg [737.990816ms]
Jun 7 13:00:32.211: INFO: Created: latency-svc-sqjq9
Jun 7 13:00:32.222: INFO: Got endpoints: latency-svc-sqjq9 [712.320301ms]
Jun 7 13:00:32.252: INFO: Created: latency-svc-dr8kl
Jun 7 13:00:32.277: INFO: Got endpoints: latency-svc-dr8kl [736.602295ms]
Jun 7 13:00:32.325: INFO: Created: latency-svc-rrh7t
Jun 7 13:00:32.343: INFO: Got endpoints: latency-svc-rrh7t [772.498571ms]
Jun 7 13:00:32.363: INFO: Created: latency-svc-7xq5b
Jun 7 13:00:32.374: INFO: Got endpoints: latency-svc-7xq5b [741.851813ms]
Jun 7 13:00:32.393: INFO: Created: latency-svc-99kfx
Jun 7 13:00:32.404: INFO: Got endpoints: latency-svc-99kfx [723.912983ms]
Jun 7 13:00:32.474: INFO: Created: latency-svc-gwfpn
Jun 7 13:00:32.476: INFO: Got endpoints: latency-svc-gwfpn [766.170217ms]
Jun 7 13:00:32.535: INFO: Created: latency-svc-ncpmq
Jun 7 13:00:32.558: INFO: Got endpoints: latency-svc-ncpmq [774.314725ms]
Jun 7 13:00:32.558: INFO: Latencies: [93.427962ms 178.119879ms 295.994591ms 339.741627ms 412.467775ms 442.283414ms 487.771226ms 570.479582ms 640.946806ms 648.796241ms 674.289434ms 676.359531ms 678.090272ms 696.772457ms 700.041544ms 706.461337ms 708.810236ms 712.320301ms 718.584973ms 723.186807ms 723.234985ms 723.912983ms 724.107453ms 724.375068ms 724.672219ms 724.778906ms 730.123782ms 730.420718ms 730.572061ms 730.738777ms 730.790866ms 730.791456ms 733.619923ms 733.877628ms 733.922561ms 736.031954ms 736.602295ms 737.990816ms 738.119249ms 739.827777ms 740.336679ms 740.762379ms 741.851813ms 741.917776ms 742.885899ms 745.740733ms 745.891561ms 748.410304ms 748.833835ms 750.576287ms 752.085508ms 754.05273ms 754.500666ms 755.115552ms 755.732654ms 756.529129ms 756.542693ms 758.597645ms 760.843136ms 764.786115ms 765.387038ms 766.170217ms 766.640602ms 767.141925ms 770.633751ms 770.657082ms 771.766425ms 772.220512ms 772.498571ms 772.51669ms 772.990158ms 774.210334ms 774.314725ms 775.069798ms 776.229321ms 776.373329ms 777.498947ms 778.117141ms 778.373761ms 778.947298ms 779.198669ms 780.040899ms 780.402281ms 783.93543ms 784.472296ms 784.905623ms 784.912692ms 786.367756ms 786.483925ms 787.463545ms 788.496155ms 789.489447ms 790.306645ms 790.387377ms 790.464739ms 790.576822ms 790.720087ms 790.75589ms 790.770169ms 790.77278ms 791.018686ms 791.190619ms 792.167245ms 792.223817ms 794.66138ms 796.092096ms 796.492515ms 796.961479ms 797.080268ms 802.691558ms 802.830784ms 804.28711ms 805.413277ms 807.80164ms 808.093349ms 808.368579ms 810.234125ms 814.392818ms 814.567007ms 814.819994ms 817.287212ms 820.791258ms 822.144383ms 825.442205ms 825.747455ms 827.320647ms 827.571347ms 828.835733ms 830.187819ms 830.622586ms 830.689763ms 832.536949ms 832.954496ms 836.066798ms 837.964ms 838.843319ms 839.773589ms 840.122158ms 840.217768ms 842.162285ms 842.781711ms 844.717141ms 847.600251ms 849.837162ms 850.462909ms 852.011299ms 854.730209ms 855.814671ms 859.336429ms 859.800129ms 862.552973ms 
863.614458ms 868.675152ms 870.670182ms 871.477678ms 871.609945ms 873.898234ms 874.022535ms 874.566876ms 877.573751ms 878.359359ms 878.779208ms 882.466242ms 882.817794ms 883.73271ms 884.369809ms 885.757319ms 886.179963ms 886.243593ms 886.331012ms 889.752504ms 892.17056ms 892.234623ms 892.306691ms 894.436607ms 895.510437ms 899.096781ms 904.083966ms 904.09601ms 904.142453ms 904.488711ms 911.434706ms 912.632379ms 913.087523ms 914.513381ms 916.114404ms 917.32374ms 918.224246ms 921.363202ms 922.014605ms 928.354689ms 928.883845ms 936.216656ms 938.426695ms 940.631134ms 941.031289ms 946.444416ms 952.309291ms 966.050422ms 982.901041ms]
Jun 7 13:00:32.559: INFO: 50 %ile: 791.018686ms
Jun 7 13:00:32.559: INFO: 90 %ile: 904.488711ms
Jun 7 13:00:32.559: INFO: 99 %ile: 966.050422ms
Jun 7 13:00:32.559: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:00:32.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8948" for this suite.
Jun 7 13:00:54.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:00:54.699: INFO: namespace svc-latency-8948 deletion completed in 22.126881547s
• [SLOW TEST:38.338 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:00:54.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-f9632906-d65b-47b3-8cab-64c18f8ee62a
STEP: Creating a pod to test consume secrets
Jun 7 13:00:54.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e" in namespace "projected-9158" to be "success or failure"
Jun 7 13:00:54.808: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.818517ms
Jun 7 13:00:56.893: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099603538s
Jun 7 13:00:58.917: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123727253s
STEP: Saw pod success
Jun 7 13:00:58.917: INFO: Pod "pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e" satisfied condition "success or failure"
Jun 7 13:00:58.930: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e container secret-volume-test:
STEP: delete the pod
Jun 7 13:00:58.955: INFO: Waiting for pod pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e to disappear
Jun 7 13:00:58.959: INFO: Pod pod-projected-secrets-288232e4-2452-44f3-a159-82b4899b309e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:00:58.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9158" for this suite.
Jun 7 13:01:04.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:01:05.054: INFO: namespace projected-9158 deletion completed in 6.091197577s
• [SLOW TEST:10.354 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] DNS
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:01:05.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.226.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.226.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.226.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.226.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.226.175_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:01:11.251: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.254: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.256: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.258: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.274: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.278: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.280: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:11.301: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local]
Jun 7 13:01:16.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.343: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.348: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.351: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:16.368: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local]
Jun 7 13:01:21.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.312: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.319: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.339: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:21.374: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local]
Jun 7 13:01:26.325: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.328: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.334: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.354: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.363: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:26.383: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local]
Jun 7 13:01:31.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.356: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.362: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:31.385: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local]
Jun 7 13:01:36.313: INFO: Unable to read wheezy_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.317: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.320: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.323: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.344: INFO: Unable to read jessie_udp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.347: INFO: Unable to read jessie_tcp@dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.350: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.352: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local from pod dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1: the server could not find the requested resource (get pods dns-test-e612525e-b815-43e2-972d-bb3f17e538f1)
Jun 7 13:01:36.371: INFO: Lookups using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 failed for: [wheezy_udp@dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service.dns-3614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_udp@dns-test-service.dns-3614.svc.cluster.local jessie_tcp@dns-test-service.dns-3614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3614.svc.cluster.local]
Jun 7 13:01:41.384: INFO: DNS probes using dns-3614/dns-test-e612525e-b815-43e2-972d-bb3f17e538f1 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:01:42.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3614" for this suite.
Jun 7 13:01:48.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:01:48.426: INFO: namespace dns-3614 deletion completed in 6.229178345s
• [SLOW TEST:43.372 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
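For reference, the name-mangling buried in the long probe script above can be sketched in isolation. The probe derives the pod A-record name from the pod's own IP (`hostname -i`), and builds the reverse-lookup (PTR) query name by reversing the service ClusterIP octets — here the 10.104.226.175 address visible in the log. A minimal sketch, using that service IP for both transforms purely for illustration:

```shell
# Assumed, simplified from the probe script above (namespace dns-3614 taken
# from the log; the real probe applies the first transform to the pod IP,
# not the service IP).
ip="10.104.226.175"

# A-record form: dots become dashes, namespace and zone appended.
pod_a_rec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-3614.pod.cluster.local"}')

# PTR form: octets reversed under in-addr.arpa.
ptr_name=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')

echo "$pod_a_rec"   # 10-104-226-175.dns-3614.pod.cluster.local
echo "$ptr_name"    # 175.226.104.10.in-addr.arpa.
```

The second result matches the `175.226.104.10.in-addr.arpa.` PTR query issued by the probe commands above.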
SSSSSSS
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:01:48.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 7 13:01:48.484: INFO: Waiting up to 5m0s for pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231" in namespace "downward-api-2081" to be "success or failure"
Jun 7 13:01:48.504: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231": Phase="Pending", Reason="", readiness=false. Elapsed: 19.829816ms
Jun 7 13:01:50.508: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023901295s
Jun 7 13:01:52.512: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028602366s
STEP: Saw pod success
Jun 7 13:01:52.513: INFO: Pod "downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231" satisfied condition "success or failure"
Jun 7 13:01:52.516: INFO: Trying to get logs from node iruya-worker pod downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231 container dapi-container:
STEP: delete the pod
Jun 7 13:01:52.572: INFO: Waiting for pod downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231 to disappear
Jun 7 13:01:52.584: INFO: Pod downward-api-47987c62-ebc3-4eef-9d80-b3f9fe9fb231 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:01:52.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2081" for this suite.
Jun 7 13:01:58.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:01:58.681: INFO: namespace downward-api-2081 deletion completed in 6.094096525s
• [SLOW TEST:10.255 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
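The Downward API test above works by injecting the node's IP into a container environment variable via a `fieldRef` and checking the container's output. A minimal sketch of the kind of pod it creates (the pod name and image here are assumptions; only the container name `dapi-container` and the `status.hostIP` field path are taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo      # assumed; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name as seen in the log above
    image: busybox:1.29        # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # downward API: the node IP of the pod's host
```

The test then waits for the pod to reach `Succeeded` and inspects the container logs for the expected value, which is why the log shows it polling the pod phase before fetching logs from node iruya-worker.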
SSS
------------------------------
[sig-apps] Daemon set [Serial]
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:01:58.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 13:01:58.783: INFO: Create a RollingUpdate DaemonSet
Jun 7 13:01:58.786: INFO: Check that daemon pods launch on every node of the cluster
Jun 7 13:01:58.789: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:01:58.792: INFO: Number of nodes with available pods: 0
Jun 7 13:01:58.792: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 13:01:59.797: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:01:59.800: INFO: Number of nodes with available pods: 0
Jun 7 13:01:59.800: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 13:02:00.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:00.874: INFO: Number of nodes with available pods: 0
Jun 7 13:02:00.874: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 13:02:02.002: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:02.006: INFO: Number of nodes with available pods: 0
Jun 7 13:02:02.006: INFO: Node iruya-worker is running more than one daemon pod
Jun 7 13:02:02.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:02.803: INFO: Number of nodes with available pods: 1
Jun 7 13:02:02.803: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 7 13:02:03.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:03.802: INFO: Number of nodes with available pods: 2
Jun 7 13:02:03.802: INFO: Number of running nodes: 2, number of available pods: 2
Jun 7 13:02:03.802: INFO: Update the DaemonSet to trigger a rollout
Jun 7 13:02:03.809: INFO: Updating DaemonSet daemon-set
Jun 7 13:02:12.831: INFO: Roll back the DaemonSet before rollout is complete
Jun 7 13:02:12.838: INFO: Updating DaemonSet daemon-set
Jun 7 13:02:12.838: INFO: Make sure DaemonSet rollback is complete
Jun 7 13:02:12.842: INFO: Wrong image for pod: daemon-set-5nzfm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jun 7 13:02:12.842: INFO: Pod daemon-set-5nzfm is not available
Jun 7 13:02:12.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:13.852: INFO: Wrong image for pod: daemon-set-5nzfm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jun 7 13:02:13.852: INFO: Pod daemon-set-5nzfm is not available
Jun 7 13:02:13.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:14.888: INFO: Wrong image for pod: daemon-set-5nzfm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jun 7 13:02:14.888: INFO: Pod daemon-set-5nzfm is not available
Jun 7 13:02:14.892: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 7 13:02:15.852: INFO: Pod daemon-set-h5b4f is not available
Jun 7 13:02:15.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1827, will wait for the garbage collector to delete the pods
Jun 7 13:02:15.919: INFO: Deleting DaemonSet.extensions daemon-set took: 6.589255ms
Jun 7 13:02:16.219: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.326656ms
Jun 7 13:02:22.223: INFO: Number of nodes with available pods: 0
Jun 7 13:02:22.223: INFO: Number of running nodes: 0, number of available pods: 0
Jun 7 13:02:22.230: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1827/daemonsets","resourceVersion":"15149110"},"items":null}
Jun 7 13:02:22.233: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1827/pods","resourceVersion":"15149110"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:02:22.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1827" for this suite.
Jun 7 13:02:28.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:02:28.393: INFO: namespace daemonsets-1827 deletion completed in 6.147047273s
• [SLOW TEST:29.711 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
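During the rollback verification above, the framework repeatedly compares each DaemonSet pod's image against the template image and reports mismatches until the rollback converges. A sketch of that per-pod check (a hypothetical standalone helper, not code from the e2e framework; pod name and images copied from the log):

```shell
# Assumed sketch of the rollback check: a pod still running the bad image
# from the aborted rollout is flagged until it is replaced.
expected="docker.io/library/nginx:1.14-alpine"   # template image after rollback
actual="foo:non-existent"                        # image observed on the pod
pod="daemon-set-5nzfm"

if [ "$actual" != "$expected" ]; then
  msg="Wrong image for pod: $pod. Expected: $expected, got: $actual."
fi
echo "$msg"
```

This reproduces the "Wrong image for pod" lines in the log; once the pod is recreated with the rolled-back image, the check passes and the test proceeds to teardown.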
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:02:28.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:02:32.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7371" for this suite.
Jun 7 13:03:10.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:03:10.595: INFO: namespace kubelet-test-7371 deletion completed in 38.107606377s
• [SLOW TEST:42.200 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:03:10.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jun 7 13:03:10.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8756'
Jun 7 13:03:10.938: INFO: stderr: ""
Jun 7 13:03:10.938: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jun 7 13:03:11.967: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:11.967: INFO: Found 0 / 1
Jun 7 13:03:12.942: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:12.942: INFO: Found 0 / 1
Jun 7 13:03:13.943: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:13.943: INFO: Found 0 / 1
Jun 7 13:03:14.961: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:14.961: INFO: Found 1 / 1
Jun 7 13:03:14.961: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 7 13:03:14.964: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:03:14.964: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for matching strings
Jun 7 13:03:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756'
Jun 7 13:03:15.075: INFO: stderr: ""
Jun 7 13:03:15.075: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Jun 13:03:13.691 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jun 13:03:13.691 # Server started, Redis version 3.2.12\n1:M 07 Jun 13:03:13.691 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jun 7 13:03:15.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1'
Jun 7 13:03:15.200: INFO: stderr: ""
Jun 7 13:03:15.200: INFO: stdout: "1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jun 7 13:03:15.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --limit-bytes=1'
Jun 7 13:03:15.299: INFO: stderr: ""
Jun 7 13:03:15.300: INFO: stdout: " "
STEP: exposing timestamps
Jun 7 13:03:15.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1 --timestamps'
Jun 7 13:03:15.397: INFO: stderr: ""
Jun 7 13:03:15.397: INFO: stdout: "2020-06-07T13:03:13.691965821Z 1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jun 7 13:03:17.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=1s'
Jun 7 13:03:18.011: INFO: stderr: ""
Jun 7 13:03:18.011: INFO: stdout: ""
Jun 7 13:03:18.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=24h'
Jun 7 13:03:18.122: INFO: stderr: ""
Jun 7 13:03:18.122: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Jun 13:03:13.691 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jun 13:03:13.691 # Server started, Redis version 3.2.12\n1:M 07 Jun 13:03:13.691 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jun 13:03:13.691 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jun 7 13:03:18.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8756'
Jun 7 13:03:18.226: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 13:03:18.226: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jun 7 13:03:18.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8756'
Jun 7 13:03:18.321: INFO: stderr: "No resources found.\n"
Jun 7 13:03:18.321: INFO: stdout: ""
Jun 7 13:03:18.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8756 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 13:03:18.410: INFO: stderr: ""
Jun 7 13:03:18.410: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:03:18.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8756" for this suite.
Jun 7 13:03:40.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:03:40.520: INFO: namespace kubectl-8756 deletion completed in 22.105686094s
• [SLOW TEST:29.924 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
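The log-filtering flags exercised in the run above can be reproduced against any running pod; a minimal sketch (the pod, container, and namespace names are placeholders from this run, not fixed values):

```shell
# Tail only the last line of the container's log
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1

# Cap the returned log at a byte count
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --limit-bytes=1

# Prefix each line with its RFC3339 timestamp
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --tail=1 --timestamps

# Return only lines newer than the given duration (empty if nothing was logged)
kubectl logs redis-master-jqwh7 redis-master --namespace=kubectl-8756 --since=1s
```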
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:03:40.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-rnk4
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 13:03:40.796: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rnk4" in namespace "subpath-1042" to be "success or failure"
Jun 7 13:03:40.800: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.740146ms
Jun 7 13:03:42.805: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008436244s
Jun 7 13:03:44.809: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 4.012612104s
Jun 7 13:03:46.814: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 6.017375656s
Jun 7 13:03:48.819: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 8.022431471s
Jun 7 13:03:50.823: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 10.027033504s
Jun 7 13:03:52.828: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 12.032097751s
Jun 7 13:03:54.832: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 14.036098107s
Jun 7 13:03:56.837: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 16.040947617s
Jun 7 13:03:58.842: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 18.045261202s
Jun 7 13:04:00.847: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 20.050199148s
Jun 7 13:04:02.851: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Running", Reason="", readiness=true. Elapsed: 22.054380144s
Jun 7 13:04:04.931: INFO: Pod "pod-subpath-test-downwardapi-rnk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.135009936s
STEP: Saw pod success
Jun 7 13:04:04.931: INFO: Pod "pod-subpath-test-downwardapi-rnk4" satisfied condition "success or failure"
Jun 7 13:04:04.940: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-rnk4 container test-container-subpath-downwardapi-rnk4:
STEP: delete the pod
Jun 7 13:04:04.967: INFO: Waiting for pod pod-subpath-test-downwardapi-rnk4 to disappear
Jun 7 13:04:04.970: INFO: Pod pod-subpath-test-downwardapi-rnk4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rnk4
Jun 7 13:04:04.970: INFO: Deleting pod "pod-subpath-test-downwardapi-rnk4" in namespace "subpath-1042"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:04:04.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1042" for this suite.
Jun 7 13:04:10.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:04:11.084: INFO: namespace subpath-1042 deletion completed in 6.105845134s
• [SLOW TEST:30.564 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
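A pod of the shape the subpath test creates can be sketched as follows; this is an illustrative manifest, not the exact one the framework generates (names, image, and paths are assumptions):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume/podname
      subPath: podname          # mount a single file of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # file content is the pod's own name
EOF
```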
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:04:11.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 in namespace container-probe-5990
Jun 7 13:04:15.178: INFO: Started pod liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 in namespace container-probe-5990
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:04:15.182: INFO: Initial restart count of pod liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 is 0
Jun 7 13:04:33.284: INFO: Restart count of pod container-probe-5990/liveness-9305d3a3-5a88-4ac5-8434-20d47fc65a39 is now 1 (18.102294158s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:04:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5990" for this suite.
Jun 7 13:04:39.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:04:39.394: INFO: namespace container-probe-5990 deletion completed in 6.090210541s
• [SLOW TEST:28.310 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
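An HTTP liveness probe of the kind restarted above can be declared as below; image, port, and thresholds are illustrative (the conformance suite uses a test server whose /healthz begins failing after a delay, which is what drives the restart count to 1):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3     # kubelet probes every 3s; repeated failures restart the container
EOF
```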
SSSSS
------------------------------
[k8s.io] Pods
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:04:39.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jun 7 13:04:43.482: INFO: Pod pod-hostip-70edd121-a0b2-4d26-9b7f-de4f44f98fec has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:04:43.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-420" for this suite.
Jun 7 13:05:05.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:05.599: INFO: namespace pods-420 deletion completed in 22.11351153s
• [SLOW TEST:26.204 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
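The hostIP assertion can be reproduced with a jsonpath query once the pod is scheduled (pod name is a placeholder):

```shell
# status.hostIP is populated with the node's IP after scheduling
kubectl get pod pod-hostip-example -o jsonpath='{.status.hostIP}'
```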
SSSSSSSSSSS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:05.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[]
Jun 7 13:05:05.738: INFO: Get endpoints failed (11.745462ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun 7 13:05:06.742: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[] (1.0157566s elapsed)
STEP: Creating pod pod1 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[pod1:[80]]
Jun 7 13:05:10.807: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[pod1:[80]] (4.057282288s elapsed)
STEP: Creating pod pod2 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[pod1:[80] pod2:[80]]
Jun 7 13:05:14.910: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[pod1:[80] pod2:[80]] (4.099118471s elapsed)
STEP: Deleting pod pod1 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[pod2:[80]]
Jun 7 13:05:15.936: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[pod2:[80]] (1.022147075s elapsed)
STEP: Deleting pod pod2 in namespace services-1636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1636 to expose endpoints map[]
Jun 7 13:05:16.954: INFO: successfully validated that service endpoint-test2 in namespace services-1636 exposes endpoints map[] (1.012874227s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:16.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1636" for this suite.
Jun 7 13:05:23.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:23.145: INFO: namespace services-1636 deletion completed in 6.13270216s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:17.546 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
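The endpoint lifecycle validated above follows from a selector-based Service: its Endpoints object is empty until ready pods match the selector, gains an address per matching pod, and shrinks as pods are deleted. A minimal sketch (labels and port are assumptions):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2    # pods carrying this label become endpoints when ready
  ports:
  - port: 80
EOF

# Initially "<none>"; lists pod IPs once matching pods pass readiness
kubectl get endpoints endpoint-test2
```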
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:23.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jun 7 13:05:27.302: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 7 13:05:32.417: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:32.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4319" for this suite.
Jun 7 13:05:38.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:38.590: INFO: namespace pods-4319 deletion completed in 6.167259363s
• [SLOW TEST:15.445 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
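The two deletion modes seen in this run map to these flags; as the warning at cleanup notes, a forced delete removes the API object without waiting for the kubelet to confirm termination:

```shell
# Graceful delete: waits up to the pod's terminationGracePeriodSeconds
kubectl delete pod mypod

# Immediate delete: no confirmation from the kubelet; the container may
# keep running on the node briefly after the API object is gone
kubectl delete pod mypod --grace-period=0 --force
```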
S
------------------------------
[sig-node] Downward API
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:38.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 7 13:05:38.673: INFO: Waiting up to 5m0s for pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c" in namespace "downward-api-5606" to be "success or failure"
Jun 7 13:05:38.699: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.693138ms
Jun 7 13:05:40.703: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030164892s
Jun 7 13:05:42.707: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034202811s
STEP: Saw pod success
Jun 7 13:05:42.707: INFO: Pod "downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c" satisfied condition "success or failure"
Jun 7 13:05:42.710: INFO: Trying to get logs from node iruya-worker2 pod downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c container dapi-container:
STEP: delete the pod
Jun 7 13:05:42.728: INFO: Waiting for pod downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c to disappear
Jun 7 13:05:42.733: INFO: Pod downward-api-06dd3b94-1c48-4fda-8c72-9d9b242c5f9c no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5606" for this suite.
Jun 7 13:05:48.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:48.831: INFO: namespace downward-api-5606 deletion completed in 6.093829926s
• [SLOW TEST:10.241 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
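The downward API behavior under test — limits.cpu/limits.memory falling back to node allocatable when the container declares no limits — can be sketched like this (names and image are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-limits
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # defaults to node allocatable CPU when unset
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # defaults to node allocatable memory when unset
EOF
```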
[sig-storage] EmptyDir volumes
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:48.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 7 13:05:48.921: INFO: Waiting up to 5m0s for pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573" in namespace "emptydir-5614" to be "success or failure"
Jun 7 13:05:48.924: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23148ms
Jun 7 13:05:50.993: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072212427s
Jun 7 13:05:52.998: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076889159s
STEP: Saw pod success
Jun 7 13:05:52.998: INFO: Pod "pod-29d95cd0-eead-40f9-b6e1-e5cb61684573" satisfied condition "success or failure"
Jun 7 13:05:53.000: INFO: Trying to get logs from node iruya-worker pod pod-29d95cd0-eead-40f9-b6e1-e5cb61684573 container test-container:
STEP: delete the pod
Jun 7 13:05:53.091: INFO: Waiting for pod pod-29d95cd0-eead-40f9-b6e1-e5cb61684573 to disappear
Jun 7 13:05:53.102: INFO: Pod pod-29d95cd0-eead-40f9-b6e1-e5cb61684573 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:05:53.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5614" for this suite.
Jun 7 13:05:59.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:05:59.196: INFO: namespace emptydir-5614 deletion completed in 6.09094215s
• [SLOW TEST:10.365 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
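An emptyDir pod of the shape tested above (default medium, i.e. node-local storage rather than tmpfs) can be sketched as follows; names and image are assumptions:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-empty-dir && touch /test-empty-dir/probe"]
    volumeMounts:
    - name: scratch
      mountPath: /test-empty-dir
  volumes:
  - name: scratch
    emptyDir: {}   # default medium: backed by the node's filesystem
EOF
```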
SSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:05:59.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:06:05.378: INFO: DNS probes using dns-test-a3182278-a86c-493f-8ec3-ba1bfaa8e730 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:06:11.534: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:11.537: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:11.537: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:16.543: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:16.547: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:16.547: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:21.543: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:21.568: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:21.568: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:26.549: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:26.553: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:26.553: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:31.543: INFO: File wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:31.546: INFO: File jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local from pod dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 contains 'foo.example.com.' instead of 'bar.example.com.'
Jun 7 13:06:31.546: INFO: Lookups using dns-4343/dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 failed for: [wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local]
Jun 7 13:06:36.587: INFO: DNS probes using dns-test-6d59cd67-f3eb-448a-bbb4-713b3fe9a633 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4343.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4343.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:06:43.016: INFO: DNS probes using dns-test-10b454b5-cf36-4434-afc3-5f55ba509c3b succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:06:43.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4343" for this suite.
Jun 7 13:06:49.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:06:49.228: INFO: namespace dns-4343 deletion completed in 6.095254599s
• [SLOW TEST:50.032 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:06:49.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 7 13:06:49.307: INFO: Waiting up to 5m0s for pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c" in namespace "emptydir-6540" to be "success or failure"
Jun 7 13:06:49.315: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.922356ms
Jun 7 13:06:51.406: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099057483s
Jun 7 13:06:53.413: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105697445s
STEP: Saw pod success
Jun 7 13:06:53.413: INFO: Pod "pod-af2c2215-762e-4003-8b77-6834ae08d92c" satisfied condition "success or failure"
Jun 7 13:06:53.416: INFO: Trying to get logs from node iruya-worker2 pod pod-af2c2215-762e-4003-8b77-6834ae08d92c container test-container:
STEP: delete the pod
Jun 7 13:06:53.455: INFO: Waiting for pod pod-af2c2215-762e-4003-8b77-6834ae08d92c to disappear
Jun 7 13:06:53.459: INFO: Pod pod-af2c2215-762e-4003-8b77-6834ae08d92c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:06:53.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6540" for this suite.
Jun 7 13:06:59.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:06:59.547: INFO: namespace emptydir-6540 deletion completed in 6.084778232s
• [SLOW TEST:10.319 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:06:59.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 13:06:59.615: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jun 7 13:07:04.620: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 7 13:07:04.620: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jun 7 13:07:04.774: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4549,SelfLink:/apis/apps/v1/namespaces/deployment-4549/deployments/test-cleanup-deployment,UID:8b642ef4-0f20-4ae3-b811-96db351d6b8b,ResourceVersion:15150113,Generation:1,CreationTimestamp:2020-06-07 13:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Jun 7 13:07:04.811: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4549,SelfLink:/apis/apps/v1/namespaces/deployment-4549/replicasets/test-cleanup-deployment-55bbcbc84c,UID:b83ff74d-4f9c-491c-a591-991ee08e9f5a,ResourceVersion:15150115,Generation:1,CreationTimestamp:2020-06-07 13:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 8b642ef4-0f20-4ae3-b811-96db351d6b8b 0xc0028c1627 0xc0028c1628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 7 13:07:04.811: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jun 7 13:07:04.812: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4549,SelfLink:/apis/apps/v1/namespaces/deployment-4549/replicasets/test-cleanup-controller,UID:3a9c4e9e-a9bc-4f94-87fb-a7d01d8df0ab,ResourceVersion:15150114,Generation:1,CreationTimestamp:2020-06-07 13:06:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 8b642ef4-0f20-4ae3-b811-96db351d6b8b 0xc0028c1557 0xc0028c1558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jun 7 13:07:04.961: INFO: Pod "test-cleanup-controller-nkmqp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-nkmqp,GenerateName:test-cleanup-controller-,Namespace:deployment-4549,SelfLink:/api/v1/namespaces/deployment-4549/pods/test-cleanup-controller-nkmqp,UID:0c4ee989-d90a-499f-b50f-a9c13a69ecb7,ResourceVersion:15150107,Generation:0,CreationTimestamp:2020-06-07 13:06:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3a9c4e9e-a9bc-4f94-87fb-a7d01d8df0ab 0xc0028c1ef7 0xc0028c1ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-l6bb8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l6bb8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-l6bb8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028c1f70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0028c1f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:06:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:07:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:06:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.124,StartTime:2020-06-07 13:06:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-07 13:07:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://24b840eaca758c6d2d0addcdc58550400716d01d083e453048abc126740e8942}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jun 7 13:07:04.961: INFO: Pod "test-cleanup-deployment-55bbcbc84c-nb22c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-nb22c,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4549,SelfLink:/api/v1/namespaces/deployment-4549/pods/test-cleanup-deployment-55bbcbc84c-nb22c,UID:ac49a01f-4930-4e23-87ab-087ba9d7f9c3,ResourceVersion:15150121,Generation:0,CreationTimestamp:2020-06-07 13:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c b83ff74d-4f9c-491c-a591-991ee08e9f5a 0xc00180a077 0xc00180a078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-l6bb8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l6bb8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-l6bb8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00180a0f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00180a110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:07:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:07:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4549" for this suite.
Jun 7 13:07:11.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:07:11.122: INFO: namespace deployment-4549 deletion completed in 6.150461884s
• [SLOW TEST:11.575 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:07:11.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8539
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 7 13:07:11.171: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 7 13:07:37.292: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8539 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:07:37.292: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:07:37.335428 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Create stream
I0607 13:07:37.335482 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Stream added, broadcasting: 1
I0607 13:07:37.409788 6 log.go:172] (0xc000b6b290) Reply frame received for 1
I0607 13:07:37.409841 6 log.go:172] (0xc000b6b290) (0xc0025c0460) Create stream
I0607 13:07:37.409854 6 log.go:172] (0xc000b6b290) (0xc0025c0460) Stream added, broadcasting: 3
I0607 13:07:37.410989 6 log.go:172] (0xc000b6b290) Reply frame received for 3
I0607 13:07:37.411008 6 log.go:172] (0xc000b6b290) (0xc0011f0000) Create stream
I0607 13:07:37.411014 6 log.go:172] (0xc000b6b290) (0xc0011f0000) Stream added, broadcasting: 5
I0607 13:07:37.411943 6 log.go:172] (0xc000b6b290) Reply frame received for 5
I0607 13:07:38.551933 6 log.go:172] (0xc000b6b290) Data frame received for 3
I0607 13:07:38.551971 6 log.go:172] (0xc0025c0460) (3) Data frame handling
I0607 13:07:38.552000 6 log.go:172] (0xc0025c0460) (3) Data frame sent
I0607 13:07:38.552385 6 log.go:172] (0xc000b6b290) Data frame received for 5
I0607 13:07:38.552416 6 log.go:172] (0xc0011f0000) (5) Data frame handling
I0607 13:07:38.552453 6 log.go:172] (0xc000b6b290) Data frame received for 3
I0607 13:07:38.552480 6 log.go:172] (0xc0025c0460) (3) Data frame handling
I0607 13:07:38.554732 6 log.go:172] (0xc000b6b290) Data frame received for 1
I0607 13:07:38.554771 6 log.go:172] (0xc0025c0320) (1) Data frame handling
I0607 13:07:38.554795 6 log.go:172] (0xc0025c0320) (1) Data frame sent
I0607 13:07:38.554812 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Stream removed, broadcasting: 1
I0607 13:07:38.554830 6 log.go:172] (0xc000b6b290) Go away received
I0607 13:07:38.554987 6 log.go:172] (0xc000b6b290) (0xc0025c0320) Stream removed, broadcasting: 1
I0607 13:07:38.555012 6 log.go:172] (0xc000b6b290) (0xc0025c0460) Stream removed, broadcasting: 3
I0607 13:07:38.555025 6 log.go:172] (0xc000b6b290) (0xc0011f0000) Stream removed, broadcasting: 5
Jun 7 13:07:38.555: INFO: Found all expected endpoints: [netserver-0]
Jun 7 13:07:38.559: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.141 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8539 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:07:38.559: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:07:38.595532 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Create stream
I0607 13:07:38.595564 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Stream added, broadcasting: 1
I0607 13:07:38.598175 6 log.go:172] (0xc0009e0b00) Reply frame received for 1
I0607 13:07:38.598226 6 log.go:172] (0xc0009e0b00) (0xc00107a000) Create stream
I0607 13:07:38.598248 6 log.go:172] (0xc0009e0b00) (0xc00107a000) Stream added, broadcasting: 3
I0607 13:07:38.599527 6 log.go:172] (0xc0009e0b00) Reply frame received for 3
I0607 13:07:38.599580 6 log.go:172] (0xc0009e0b00) (0xc00107a1e0) Create stream
I0607 13:07:38.599602 6 log.go:172] (0xc0009e0b00) (0xc00107a1e0) Stream added, broadcasting: 5
I0607 13:07:38.600568 6 log.go:172] (0xc0009e0b00) Reply frame received for 5
I0607 13:07:39.671127 6 log.go:172] (0xc0009e0b00) Data frame received for 3
I0607 13:07:39.671174 6 log.go:172] (0xc00107a000) (3) Data frame handling
I0607 13:07:39.671201 6 log.go:172] (0xc00107a000) (3) Data frame sent
I0607 13:07:39.671223 6 log.go:172] (0xc0009e0b00) Data frame received for 3
I0607 13:07:39.671316 6 log.go:172] (0xc00107a000) (3) Data frame handling
I0607 13:07:39.671927 6 log.go:172] (0xc0009e0b00) Data frame received for 5
I0607 13:07:39.671963 6 log.go:172] (0xc00107a1e0) (5) Data frame handling
I0607 13:07:39.674415 6 log.go:172] (0xc0009e0b00) Data frame received for 1
I0607 13:07:39.674499 6 log.go:172] (0xc0011f0460) (1) Data frame handling
I0607 13:07:39.674545 6 log.go:172] (0xc0011f0460) (1) Data frame sent
I0607 13:07:39.674578 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Stream removed, broadcasting: 1
I0607 13:07:39.674608 6 log.go:172] (0xc0009e0b00) Go away received
I0607 13:07:39.674799 6 log.go:172] (0xc0009e0b00) (0xc0011f0460) Stream removed, broadcasting: 1
I0607 13:07:39.674835 6 log.go:172] (0xc0009e0b00) (0xc00107a000) Stream removed, broadcasting: 3
I0607 13:07:39.674851 6 log.go:172] (0xc0009e0b00) (0xc00107a1e0) Stream removed, broadcasting: 5
Jun 7 13:07:39.674: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:07:39.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8539" for this suite.
Jun 7 13:08:03.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:08:03.772: INFO: namespace pod-network-test-8539 deletion completed in 24.09186073s
• [SLOW TEST:52.649 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:08:03.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 13:08:03.882: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jun 7 13:08:08.887: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 7 13:08:08.887: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jun 7 13:08:10.892: INFO: Creating deployment "test-rollover-deployment"
Jun 7 13:08:10.904: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jun 7 13:08:12.911: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jun 7 13:08:12.918: INFO: Ensure that both replica sets have 1 created replica
Jun 7 13:08:12.924: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jun 7 13:08:12.930: INFO: Updating deployment test-rollover-deployment
Jun 7 13:08:12.930: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jun 7 13:08:15.006: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jun 7 13:08:15.011: INFO: Make sure deployment "test-rollover-deployment" is complete
Jun 7 13:08:15.016: INFO: all replica sets need to contain the pod-template-hash label
Jun 7 13:08:15.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132093, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:08:17.023: INFO: all replica sets need to contain the pod-template-hash label
Jun 7 13:08:17.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:08:19.023: INFO: all replica sets need to contain the pod-template-hash label
Jun 7 13:08:19.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:08:21.024: INFO: all replica sets need to contain the pod-template-hash label
Jun 7 13:08:21.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:08:23.025: INFO: all replica sets need to contain the pod-template-hash label
Jun 7 13:08:23.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:08:25.024: INFO: all replica sets need to contain the pod-template-hash label
Jun 7 13:08:25.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132096, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727132090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:08:27.022: INFO:
Jun 7 13:08:27.023: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jun 7 13:08:27.028: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/deployments/test-rollover-deployment,UID:2e006d3a-61d0-4541-bfc8-567e45960965,ResourceVersion:15150459,Generation:2,CreationTimestamp:2020-06-07 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-07 13:08:10 +0000 UTC 2020-06-07 13:08:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-07 13:08:26 +0000 UTC 2020-06-07 13:08:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jun 7 13:08:27.032: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/replicasets/test-rollover-deployment-854595fc44,UID:0e47c3cd-6f76-4af7-860e-83a1c4e90b1f,ResourceVersion:15150448,Generation:2,CreationTimestamp:2020-06-07 13:08:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2e006d3a-61d0-4541-bfc8-567e45960965 0xc000d5e147 0xc000d5e148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jun 7 13:08:27.032: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jun 7 13:08:27.032: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/replicasets/test-rollover-controller,UID:e665f422-1f3d-4bd1-8d2a-93f1f02ef646,ResourceVersion:15150457,Generation:2,CreationTimestamp:2020-06-07 13:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2e006d3a-61d0-4541-bfc8-567e45960965 0xc002d11e77 0xc002d11e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 7 13:08:27.032: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3883,SelfLink:/apis/apps/v1/namespaces/deployment-3883/replicasets/test-rollover-deployment-9b8b997cf,UID:077cdcdc-1ba8-41f8-8e01-684563c06f4c,ResourceVersion:15150409,Generation:2,CreationTimestamp:2020-06-07 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2e006d3a-61d0-4541-bfc8-567e45960965 0xc000d5e230 0xc000d5e231}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 7 13:08:27.035: INFO: Pod "test-rollover-deployment-854595fc44-7dmvm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-7dmvm,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3883,SelfLink:/api/v1/namespaces/deployment-3883/pods/test-rollover-deployment-854595fc44-7dmvm,UID:fe09be12-29c0-43bf-9cc3-2565df513b8c,ResourceVersion:15150424,Generation:0,CreationTimestamp:2020-06-07 13:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0e47c3cd-6f76-4af7-860e-83a1c4e90b1f 0xc001b05bc7 0xc001b05bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wgkpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wgkpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wgkpm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b05c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b05c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.129,StartTime:2020-06-07 13:08:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-07 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://fe3a00547fa43ed4998edfa49a24590a83b64a26c0cd1a09bce026082b41480e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:08:27.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3883" for this suite.
Jun 7 13:08:33.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:08:33.266: INFO: namespace deployment-3883 deletion completed in 6.228047426s
• [SLOW TEST:29.495 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
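The rollover test above polls the deployment status (13:08:15 through 13:08:27) until the new ReplicaSet fully replaces the old ones. As a hedged sketch (not the framework's actual Go code — the real check lives in the e2e deployment utilities), the completion condition implied by the status lines can be expressed roughly as: every replica is updated, available, and nothing is unavailable.

```python
# Hypothetical sketch of the rollover-completion condition implied by the
# polling above. The field names mirror v1.DeploymentStatus; the real e2e
# framework implements this check in Go.

def deployment_complete(status, desired_replicas):
    """A rollout is done once all replicas are updated and available."""
    return (
        status["UpdatedReplicas"] == desired_replicas
        and status["Replicas"] == desired_replicas
        and status["AvailableReplicas"] == desired_replicas
        and status["UnavailableReplicas"] == 0
    )

# Mid-rollover status, as logged at 13:08:17 (old pod still counted):
in_progress = {"Replicas": 2, "UpdatedReplicas": 1,
               "AvailableReplicas": 1, "UnavailableReplicas": 1}
# Final status at 13:08:27, after the old ReplicaSets scale to zero:
done = {"Replicas": 1, "UpdatedReplicas": 1,
        "AvailableReplicas": 1, "UnavailableReplicas": 0}

print(deployment_complete(in_progress, 1))  # False
print(deployment_complete(done, 1))         # True
```

This matches the log's final state: the new ReplicaSet `test-rollover-deployment-854595fc44` holds the single replica, while both old ReplicaSets report `Replicas:0`.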
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:08:33.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 7 13:08:33.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2561'
Jun 7 13:08:36.184: INFO: stderr: ""
Jun 7 13:08:36.184: INFO: stdout: "replicationcontroller/redis-master created\n"
Jun 7 13:08:36.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2561'
Jun 7 13:08:36.494: INFO: stderr: ""
Jun 7 13:08:36.494: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 7 13:08:37.499: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:08:37.499: INFO: Found 0 / 1
Jun 7 13:08:38.500: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:08:38.500: INFO: Found 0 / 1
Jun 7 13:08:39.499: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:08:39.499: INFO: Found 0 / 1
Jun 7 13:08:40.499: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:08:40.499: INFO: Found 1 / 1
Jun 7 13:08:40.499: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 7 13:08:40.503: INFO: Selector matched 1 pods for map[app:redis]
Jun 7 13:08:40.503: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 7 13:08:40.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-cqw9b --namespace=kubectl-2561'
Jun 7 13:08:40.618: INFO: stderr: ""
Jun 7 13:08:40.618: INFO: stdout: "Name: redis-master-cqw9b\nNamespace: kubectl-2561\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Sun, 07 Jun 2020 13:08:36 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.130\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://4e1bd824d6d8d5d310ed724744bd039ae0bd5fc7c32d7f0193d2919c4d2c8c5f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 07 Jun 2020 13:08:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-224lc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-224lc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-224lc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2561/redis-master-cqw9b to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n"
Jun 7 13:08:40.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2561'
Jun 7 13:08:40.732: INFO: stderr: ""
Jun 7 13:08:40.732: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2561\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-cqw9b\n"
Jun 7 13:08:40.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2561'
Jun 7 13:08:40.843: INFO: stderr: ""
Jun 7 13:08:40.844: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2561\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.205.146\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.130:6379\nSession Affinity: None\nEvents: \n"
Jun 7 13:08:40.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Jun 7 13:08:40.971: INFO: stderr: ""
Jun 7 13:08:40.971: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 07 Jun 2020 13:08:31 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 
10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 83d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 83d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Jun 7 13:08:40.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2561'
Jun 7 13:08:41.076: INFO: stderr: ""
Jun 7 13:08:41.076: INFO: stdout: "Name: kubectl-2561\nLabels: e2e-framework=kubectl\n e2e-run=c47f29a4-0a06-4452-bdd7-01d332ca5e07\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:08:41.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2561" for this suite.
Jun 7 13:09:03.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:09:03.202: INFO: namespace kubectl-2561 deletion completed in 22.122785937s
• [SLOW TEST:29.936 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
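The "Selector matched 1 pods for map[app:redis]" lines in the kubectl test above come from filtering pods with an equality-based label selector: a selector matches a pod when every key/value pair in the selector is present in the pod's labels. A minimal sketch, assuming plain equality matching (no `matchExpressions`):

```python
# Hypothetical sketch of equality-based label-selector matching, as used by
# the "Selector matched 1 pods for map[app:redis]" poll above.

def selector_matches(selector, pod_labels):
    """True when every key/value in the selector appears in the pod labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# The redis-master pod carries labels app=redis and role=master:
pod = {"app": "redis", "role": "master"}
print(selector_matches({"app": "redis"}, pod))                   # True
print(selector_matches({"app": "redis", "role": "slave"}, pod))  # False
```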
SSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:09:03.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 13:09:09.407: INFO: DNS probes using dns-7852/dns-test-af2103af-ebd5-4347-8dc7-18656c9be5c4 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:09:09.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7852" for this suite.
Jun 7 13:09:15.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:09:15.709: INFO: namespace dns-7852 deletion completed in 6.093317528s
• [SLOW TEST:12.507 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:09:15.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 7 13:09:15.947: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:09:24.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1969" for this suite.
Jun 7 13:09:46.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:09:46.429: INFO: namespace init-container-1969 deletion completed in 22.115184927s
• [SLOW TEST:30.719 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:09:46.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6204cb11-b4a7-4c02-b797-ae76ced9d12f
STEP: Creating a pod to test consume secrets
Jun 7 13:09:46.510: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276" in namespace "projected-7451" to be "success or failure"
Jun 7 13:09:46.514: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276": Phase="Pending", Reason="", readiness=false. Elapsed: 3.861488ms
Jun 7 13:09:48.518: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008308523s
Jun 7 13:09:50.522: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012441739s
STEP: Saw pod success
Jun 7 13:09:50.522: INFO: Pod "pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276" satisfied condition "success or failure"
Jun 7 13:09:50.525: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276 container projected-secret-volume-test:
STEP: delete the pod
Jun 7 13:09:51.044: INFO: Waiting for pod pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276 to disappear
Jun 7 13:09:51.082: INFO: Pod pod-projected-secrets-d3122e5b-2133-4cc4-8985-82f375c41276 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:09:51.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7451" for this suite.
Jun 7 13:09:57.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:09:57.544: INFO: namespace projected-7451 deletion completed in 6.457735295s
• [SLOW TEST:11.115 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:09:57.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jun 7 13:09:57.611: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 7 13:09:57.618: INFO: Waiting for terminating namespaces to be deleted...
Jun 7 13:09:57.620: INFO:
Logging pods the kubelet thinks is on node iruya-worker before test
Jun 7 13:09:57.625: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 7 13:09:57.625: INFO: Container kube-proxy ready: true, restart count 0
Jun 7 13:09:57.625: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 7 13:09:57.625: INFO: Container kindnet-cni ready: true, restart count 2
Jun 7 13:09:57.625: INFO:
Logging pods the kubelet thinks is on node iruya-worker2 before test
Jun 7 13:09:57.655: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Jun 7 13:09:57.655: INFO: Container kube-proxy ready: true, restart count 0
Jun 7 13:09:57.655: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Jun 7 13:09:57.655: INFO: Container kindnet-cni ready: true, restart count 2
Jun 7 13:09:57.655: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Jun 7 13:09:57.655: INFO: Container coredns ready: true, restart count 0
Jun 7 13:09:57.655: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Jun 7 13:09:57.655: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-89b0b349-1917-4301-8287-9a5fb1a51a7f 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-89b0b349-1917-4301-8287-9a5fb1a51a7f off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-89b0b349-1917-4301-8287-9a5fb1a51a7f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:10:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6062" for this suite.
Jun 7 13:10:23.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:10:23.951: INFO: namespace sched-pred-6062 deletion completed in 18.115351129s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:26.408 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:10:23.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e85892e8-e253-4012-be67-82f6daa9847e
STEP: Creating a pod to test consume secrets
Jun 7 13:10:24.157: INFO: Waiting up to 5m0s for pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa" in namespace "secrets-7953" to be "success or failure"
Jun 7 13:10:24.160: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54021ms
Jun 7 13:10:26.219: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061410609s
Jun 7 13:10:28.223: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065847357s
STEP: Saw pod success
Jun 7 13:10:28.223: INFO: Pod "pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa" satisfied condition "success or failure"
Jun 7 13:10:28.226: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa container secret-volume-test:
STEP: delete the pod
Jun 7 13:10:28.247: INFO: Waiting for pod pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa to disappear
Jun 7 13:10:28.320: INFO: Pod pod-secrets-a2ab51b9-0e6d-4bac-976d-530d3bf0ebaa no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:10:28.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7953" for this suite.
Jun 7 13:10:34.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:10:34.430: INFO: namespace secrets-7953 deletion completed in 6.104901434s
• [SLOW TEST:10.479 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:10:34.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 7 13:10:39.035: INFO: Successfully updated pod "labelsupdate1476ea7c-7ef8-4f57-bf0d-2b1efb159ca6"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:10:41.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7957" for this suite.
Jun 7 13:11:03.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:11:03.168: INFO: namespace projected-7957 deletion completed in 22.093175465s
• [SLOW TEST:28.738 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:11:03.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-86871ee0-444f-4be1-a576-1efd8b0ef3f3
STEP: Creating a pod to test consume configMaps
Jun 7 13:11:03.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a" in namespace "projected-4058" to be "success or failure"
Jun 7 13:11:03.275: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.350985ms
Jun 7 13:11:05.280: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034121786s
Jun 7 13:11:07.284: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037883185s
STEP: Saw pod success
Jun 7 13:11:07.284: INFO: Pod "pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a" satisfied condition "success or failure"
Jun 7 13:11:07.286: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 13:11:07.407: INFO: Waiting for pod pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a to disappear
Jun 7 13:11:07.412: INFO: Pod pod-projected-configmaps-81265c81-2037-4cb3-a082-663837ad9e9a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:11:07.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4058" for this suite.
Jun 7 13:11:13.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:11:13.537: INFO: namespace projected-4058 deletion completed in 6.11897773s
• [SLOW TEST:10.369 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:11:13.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 7 13:11:13.625: INFO: Waiting up to 5m0s for pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608" in namespace "emptydir-8029" to be "success or failure"
Jun 7 13:11:13.635: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608": Phase="Pending", Reason="", readiness=false. Elapsed: 9.717463ms
Jun 7 13:11:15.639: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013958666s
Jun 7 13:11:17.643: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017812972s
STEP: Saw pod success
Jun 7 13:11:17.643: INFO: Pod "pod-d7bea0c2-293d-4931-bd18-e6d06d17a608" satisfied condition "success or failure"
Jun 7 13:11:17.645: INFO: Trying to get logs from node iruya-worker2 pod pod-d7bea0c2-293d-4931-bd18-e6d06d17a608 container test-container:
STEP: delete the pod
Jun 7 13:11:17.661: INFO: Waiting for pod pod-d7bea0c2-293d-4931-bd18-e6d06d17a608 to disappear
Jun 7 13:11:17.665: INFO: Pod pod-d7bea0c2-293d-4931-bd18-e6d06d17a608 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:11:17.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8029" for this suite.
Jun 7 13:11:23.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:11:23.773: INFO: namespace emptydir-8029 deletion completed in 6.105348375s
• [SLOW TEST:10.236 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:11:23.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0e25059e-2fa6-4b95-af62-87cbeddf5c23
STEP: Creating a pod to test consume secrets
Jun 7 13:11:23.883: INFO: Waiting up to 5m0s for pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b" in namespace "secrets-2217" to be "success or failure"
Jun 7 13:11:23.903: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.253551ms
Jun 7 13:11:25.908: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024805389s
Jun 7 13:11:27.912: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029066786s
STEP: Saw pod success
Jun 7 13:11:27.912: INFO: Pod "pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b" satisfied condition "success or failure"
Jun 7 13:11:27.915: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b container secret-volume-test:
STEP: delete the pod
Jun 7 13:11:27.948: INFO: Waiting for pod pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b to disappear
Jun 7 13:11:27.953: INFO: Pod pod-secrets-57a7b1f1-1a2b-4826-89d0-aae88ffa210b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:11:27.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2217" for this suite.
Jun 7 13:11:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:11:34.053: INFO: namespace secrets-2217 deletion completed in 6.097576367s
• [SLOW TEST:10.279 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:11:34.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-45a330c6-563e-43eb-94de-cd5b9fab5eda
STEP: Creating a pod to test consume configMaps
Jun 7 13:11:34.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7" in namespace "projected-7135" to be "success or failure"
Jun 7 13:11:34.167: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.913494ms
Jun 7 13:11:36.192: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030429217s
Jun 7 13:11:38.195: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034042144s
STEP: Saw pod success
Jun 7 13:11:38.196: INFO: Pod "pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7" satisfied condition "success or failure"
Jun 7 13:11:38.198: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7 container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 13:11:38.521: INFO: Waiting for pod pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7 to disappear
Jun 7 13:11:38.557: INFO: Pod pod-projected-configmaps-83ec3ddb-c99b-4909-822e-85c580febcd7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:11:38.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7135" for this suite.
Jun 7 13:11:44.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:11:44.722: INFO: namespace projected-7135 deletion completed in 6.161702335s
• [SLOW TEST:10.668 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:11:44.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-vx8n
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 13:11:44.848: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vx8n" in namespace "subpath-9374" to be "success or failure"
Jun 7 13:11:44.856: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.36031ms
Jun 7 13:11:46.861: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013230945s
Jun 7 13:11:48.866: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 4.018140736s
Jun 7 13:11:50.871: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 6.022819358s
Jun 7 13:11:52.875: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 8.027220647s
Jun 7 13:11:54.880: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 10.031986441s
Jun 7 13:11:56.884: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 12.036294021s
Jun 7 13:11:58.888: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 14.040011304s
Jun 7 13:12:00.893: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 16.045345599s
Jun 7 13:12:02.898: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 18.049871972s
Jun 7 13:12:04.903: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 20.054585518s
Jun 7 13:12:06.907: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Running", Reason="", readiness=true. Elapsed: 22.059122723s
Jun 7 13:12:08.912: INFO: Pod "pod-subpath-test-configmap-vx8n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063974117s
STEP: Saw pod success
Jun 7 13:12:08.912: INFO: Pod "pod-subpath-test-configmap-vx8n" satisfied condition "success or failure"
Jun 7 13:12:08.915: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-vx8n container test-container-subpath-configmap-vx8n:
STEP: delete the pod
Jun 7 13:12:08.983: INFO: Waiting for pod pod-subpath-test-configmap-vx8n to disappear
Jun 7 13:12:08.989: INFO: Pod pod-subpath-test-configmap-vx8n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vx8n
Jun 7 13:12:08.989: INFO: Deleting pod "pod-subpath-test-configmap-vx8n" in namespace "subpath-9374"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:12:08.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9374" for this suite.
Jun 7 13:12:15.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:12:15.088: INFO: namespace subpath-9374 deletion completed in 6.092473451s
• [SLOW TEST:30.364 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:12:15.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:12:41.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5138" for this suite.
Jun 7 13:12:47.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:12:47.468: INFO: namespace namespaces-5138 deletion completed in 6.086039225s
STEP: Destroying namespace "nsdeletetest-8702" for this suite.
Jun 7 13:12:47.470: INFO: Namespace nsdeletetest-8702 was already deleted
STEP: Destroying namespace "nsdeletetest-9657" for this suite.
Jun 7 13:12:53.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:12:53.575: INFO: namespace nsdeletetest-9657 deletion completed in 6.105888105s
• [SLOW TEST:38.486 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:12:53.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-fc8cf3a1-911e-473c-bec4-a8f821e9e34d
STEP: Creating secret with name s-test-opt-upd-98a3f903-13bd-4cdc-802a-3c299c62b3e9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fc8cf3a1-911e-473c-bec4-a8f821e9e34d
STEP: Updating secret s-test-opt-upd-98a3f903-13bd-4cdc-802a-3c299c62b3e9
STEP: Creating secret with name s-test-opt-create-a1d834a7-8971-45c5-9db7-75f77e64d9b8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:13:01.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-869" for this suite.
Jun 7 13:13:23.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:13:23.902: INFO: namespace secrets-869 deletion completed in 22.12511957s
• [SLOW TEST:30.326 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:13:23.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-14
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-14
STEP: Deleting pre-stop pod
Jun 7 13:13:37.010: INFO: Saw: {
"Hostname": "server",
"Sent": null,
"Received": {
"prestop": 1
},
"Errors": null,
"Log": [
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
],
"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:13:37.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-14" for this suite.
Jun 7 13:14:15.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:14:15.160: INFO: namespace prestop-14 deletion completed in 38.131781175s
• [SLOW TEST:51.257 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:14:15.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-df3c1850-629c-404c-9118-fa5267ef94cc
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:14:21.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9944" for this suite.
Jun 7 13:14:43.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:14:43.422: INFO: namespace configmap-9944 deletion completed in 22.106409992s
• [SLOW TEST:28.262 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:14:43.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7394
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 7 13:14:43.454: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 7 13:15:09.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.141:8080/dial?request=hostName&protocol=udp&host=10.244.1.151&port=8081&tries=1'] Namespace:pod-network-test-7394 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:15:09.622: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:15:09.647922 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Create stream
I0607 13:15:09.647988 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Stream added, broadcasting: 1
I0607 13:15:09.651076 6 log.go:172] (0xc001bfc8f0) Reply frame received for 1
I0607 13:15:09.651110 6 log.go:172] (0xc001bfc8f0) (0xc0021d4d20) Create stream
I0607 13:15:09.651116 6 log.go:172] (0xc001bfc8f0) (0xc0021d4d20) Stream added, broadcasting: 3
I0607 13:15:09.651979 6 log.go:172] (0xc001bfc8f0) Reply frame received for 3
I0607 13:15:09.652012 6 log.go:172] (0xc001bfc8f0) (0xc0025c00a0) Create stream
I0607 13:15:09.652029 6 log.go:172] (0xc001bfc8f0) (0xc0025c00a0) Stream added, broadcasting: 5
I0607 13:15:09.652942 6 log.go:172] (0xc001bfc8f0) Reply frame received for 5
I0607 13:15:09.780942 6 log.go:172] (0xc001bfc8f0) Data frame received for 3
I0607 13:15:09.780986 6 log.go:172] (0xc0021d4d20) (3) Data frame handling
I0607 13:15:09.781009 6 log.go:172] (0xc0021d4d20) (3) Data frame sent
I0607 13:15:09.781907 6 log.go:172] (0xc001bfc8f0) Data frame received for 3
I0607 13:15:09.781942 6 log.go:172] (0xc0021d4d20) (3) Data frame handling
I0607 13:15:09.782065 6 log.go:172] (0xc001bfc8f0) Data frame received for 5
I0607 13:15:09.782096 6 log.go:172] (0xc0025c00a0) (5) Data frame handling
I0607 13:15:09.783935 6 log.go:172] (0xc001bfc8f0) Data frame received for 1
I0607 13:15:09.783957 6 log.go:172] (0xc0025c0000) (1) Data frame handling
I0607 13:15:09.783969 6 log.go:172] (0xc0025c0000) (1) Data frame sent
I0607 13:15:09.783992 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Stream removed, broadcasting: 1
I0607 13:15:09.784022 6 log.go:172] (0xc001bfc8f0) Go away received
I0607 13:15:09.784111 6 log.go:172] (0xc001bfc8f0) (0xc0025c0000) Stream removed, broadcasting: 1
I0607 13:15:09.784128 6 log.go:172] (0xc001bfc8f0) (0xc0021d4d20) Stream removed, broadcasting: 3
I0607 13:15:09.784152 6 log.go:172] (0xc001bfc8f0) (0xc0025c00a0) Stream removed, broadcasting: 5
Jun 7 13:15:09.784: INFO: Waiting for endpoints: map[]
Jun 7 13:15:09.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.141:8080/dial?request=hostName&protocol=udp&host=10.244.2.140&port=8081&tries=1'] Namespace:pod-network-test-7394 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:15:09.788: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:15:09.823009 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Create stream
I0607 13:15:09.823039 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Stream added, broadcasting: 1
I0607 13:15:09.826843 6 log.go:172] (0xc0023da2c0) Reply frame received for 1
I0607 13:15:09.826883 6 log.go:172] (0xc0023da2c0) (0xc0025c0140) Create stream
I0607 13:15:09.826889 6 log.go:172] (0xc0023da2c0) (0xc0025c0140) Stream added, broadcasting: 3
I0607 13:15:09.827868 6 log.go:172] (0xc0023da2c0) Reply frame received for 3
I0607 13:15:09.827917 6 log.go:172] (0xc0023da2c0) (0xc0025c01e0) Create stream
I0607 13:15:09.827932 6 log.go:172] (0xc0023da2c0) (0xc0025c01e0) Stream added, broadcasting: 5
I0607 13:15:09.828968 6 log.go:172] (0xc0023da2c0) Reply frame received for 5
I0607 13:15:09.899647 6 log.go:172] (0xc0023da2c0) Data frame received for 3
I0607 13:15:09.899679 6 log.go:172] (0xc0025c0140) (3) Data frame handling
I0607 13:15:09.899695 6 log.go:172] (0xc0025c0140) (3) Data frame sent
I0607 13:15:09.900224 6 log.go:172] (0xc0023da2c0) Data frame received for 5
I0607 13:15:09.900246 6 log.go:172] (0xc0025c01e0) (5) Data frame handling
I0607 13:15:09.900270 6 log.go:172] (0xc0023da2c0) Data frame received for 3
I0607 13:15:09.900294 6 log.go:172] (0xc0025c0140) (3) Data frame handling
I0607 13:15:09.902184 6 log.go:172] (0xc0023da2c0) Data frame received for 1
I0607 13:15:09.902197 6 log.go:172] (0xc001ed14a0) (1) Data frame handling
I0607 13:15:09.902208 6 log.go:172] (0xc001ed14a0) (1) Data frame sent
I0607 13:15:09.902216 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Stream removed, broadcasting: 1
I0607 13:15:09.902231 6 log.go:172] (0xc0023da2c0) Go away received
I0607 13:15:09.902412 6 log.go:172] (0xc0023da2c0) (0xc001ed14a0) Stream removed, broadcasting: 1
I0607 13:15:09.902480 6 log.go:172] (0xc0023da2c0) (0xc0025c0140) Stream removed, broadcasting: 3
I0607 13:15:09.902500 6 log.go:172] (0xc0023da2c0) (0xc0025c01e0) Stream removed, broadcasting: 5
Jun 7 13:15:09.902: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:15:09.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7394" for this suite.
Jun 7 13:15:33.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:15:34.033: INFO: namespace pod-network-test-7394 deletion completed in 24.126954206s
• [SLOW TEST:50.611 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:15:34.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-79836fea-aaa6-47b0-b2e3-38180aa91062
STEP: Creating a pod to test consume secrets
Jun 7 13:15:34.097: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7" in namespace "projected-2638" to be "success or failure"
Jun 7 13:15:34.101: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.41626ms
Jun 7 13:15:36.105: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729245s
Jun 7 13:15:38.109: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011649328s
STEP: Saw pod success
Jun 7 13:15:38.109: INFO: Pod "pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7" satisfied condition "success or failure"
Jun 7 13:15:38.112: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7 container projected-secret-volume-test:
STEP: delete the pod
Jun 7 13:15:38.147: INFO: Waiting for pod pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7 to disappear
Jun 7 13:15:38.155: INFO: Pod pod-projected-secrets-ec52f622-65bc-44c5-abfa-310fbc912dd7 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:15:38.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2638" for this suite.
Jun 7 13:15:44.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:15:44.546: INFO: namespace projected-2638 deletion completed in 6.387641443s
• [SLOW TEST:10.511 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:15:44.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jun 7 13:15:48.794: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-201f32eb-2191-42de-aaeb-f52d438b4e17,GenerateName:,Namespace:events-1177,SelfLink:/api/v1/namespaces/events-1177/pods/send-events-201f32eb-2191-42de-aaeb-f52d438b4e17,UID:4fea7e7d-1e53-4dcf-9094-3cf011fe64de,ResourceVersion:15151985,Generation:0,CreationTimestamp:2020-06-07 13:15:44 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 774339735,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ltc5c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ltc5c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ltc5c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:15:44 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.153,StartTime:2020-06-07 13:15:44 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-07 13:15:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://86f3e0b8e60952b68748ad618a6bcf91495de971c55f8414ede3421a13ceae20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jun 7 13:15:50.799: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jun 7 13:15:52.803: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:15:52.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1177" for this suite.
Jun 7 13:16:34.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:16:34.929: INFO: namespace events-1177 deletion completed in 42.108074877s
• [SLOW TEST:50.383 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:16:34.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6nng
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 13:16:35.020: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6nng" in namespace "subpath-9612" to be "success or failure"
Jun 7 13:16:35.024: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Pending", Reason="", readiness=false. Elapsed: 3.892039ms
Jun 7 13:16:37.029: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008938453s
Jun 7 13:16:39.034: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 4.013592635s
Jun 7 13:16:41.038: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 6.018382338s
Jun 7 13:16:43.043: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 8.023162241s
Jun 7 13:16:45.047: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 10.027167028s
Jun 7 13:16:47.051: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 12.03111321s
Jun 7 13:16:49.055: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 14.035561219s
Jun 7 13:16:51.060: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 16.039945226s
Jun 7 13:16:53.066: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 18.045696111s
Jun 7 13:16:55.070: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 20.049906999s
Jun 7 13:16:57.074: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Running", Reason="", readiness=true. Elapsed: 22.053763287s
Jun 7 13:16:59.196: INFO: Pod "pod-subpath-test-configmap-6nng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.175808078s
STEP: Saw pod success
Jun 7 13:16:59.196: INFO: Pod "pod-subpath-test-configmap-6nng" satisfied condition "success or failure"
Jun 7 13:16:59.199: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-6nng container test-container-subpath-configmap-6nng:
STEP: delete the pod
Jun 7 13:16:59.218: INFO: Waiting for pod pod-subpath-test-configmap-6nng to disappear
Jun 7 13:16:59.228: INFO: Pod pod-subpath-test-configmap-6nng no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6nng
Jun 7 13:16:59.228: INFO: Deleting pod "pod-subpath-test-configmap-6nng" in namespace "subpath-9612"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:16:59.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9612" for this suite.
Jun 7 13:17:05.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:17:05.350: INFO: namespace subpath-9612 deletion completed in 6.095755045s
• [SLOW TEST:30.421 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:17:05.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 13:17:05.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4476'
Jun 7 13:17:05.499: INFO: stderr: ""
Jun 7 13:17:05.499: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jun 7 13:17:05.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4476'
Jun 7 13:17:12.173: INFO: stderr: ""
Jun 7 13:17:12.173: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:17:12.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4476" for this suite.
Jun 7 13:17:18.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:17:18.274: INFO: namespace kubectl-4476 deletion completed in 6.098041197s
• [SLOW TEST:12.924 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:17:18.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 7 13:17:28.402: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.402: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.439268 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Create stream
I0607 13:17:28.439297 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Stream added, broadcasting: 1
I0607 13:17:28.441467 6 log.go:172] (0xc002b6c9a0) Reply frame received for 1
I0607 13:17:28.441603 6 log.go:172] (0xc002b6c9a0) (0xc0017b4140) Create stream
I0607 13:17:28.441618 6 log.go:172] (0xc002b6c9a0) (0xc0017b4140) Stream added, broadcasting: 3
I0607 13:17:28.442826 6 log.go:172] (0xc002b6c9a0) Reply frame received for 3
I0607 13:17:28.442892 6 log.go:172] (0xc002b6c9a0) (0xc002ac8dc0) Create stream
I0607 13:17:28.442919 6 log.go:172] (0xc002b6c9a0) (0xc002ac8dc0) Stream added, broadcasting: 5
I0607 13:17:28.443975 6 log.go:172] (0xc002b6c9a0) Reply frame received for 5
I0607 13:17:28.504970 6 log.go:172] (0xc002b6c9a0) Data frame received for 5
I0607 13:17:28.504999 6 log.go:172] (0xc002ac8dc0) (5) Data frame handling
I0607 13:17:28.505033 6 log.go:172] (0xc002b6c9a0) Data frame received for 3
I0607 13:17:28.505076 6 log.go:172] (0xc0017b4140) (3) Data frame handling
I0607 13:17:28.505311 6 log.go:172] (0xc0017b4140) (3) Data frame sent
I0607 13:17:28.505342 6 log.go:172] (0xc002b6c9a0) Data frame received for 3
I0607 13:17:28.505358 6 log.go:172] (0xc0017b4140) (3) Data frame handling
I0607 13:17:28.506754 6 log.go:172] (0xc002b6c9a0) Data frame received for 1
I0607 13:17:28.506770 6 log.go:172] (0xc002ac8d20) (1) Data frame handling
I0607 13:17:28.506779 6 log.go:172] (0xc002ac8d20) (1) Data frame sent
I0607 13:17:28.506792 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Stream removed, broadcasting: 1
I0607 13:17:28.506874 6 log.go:172] (0xc002b6c9a0) (0xc002ac8d20) Stream removed, broadcasting: 1
I0607 13:17:28.506886 6 log.go:172] (0xc002b6c9a0) (0xc0017b4140) Stream removed, broadcasting: 3
I0607 13:17:28.506945 6 log.go:172] (0xc002b6c9a0) Go away received
I0607 13:17:28.507053 6 log.go:172] (0xc002b6c9a0) (0xc002ac8dc0) Stream removed, broadcasting: 5
Jun 7 13:17:28.507: INFO: Exec stderr: ""
Jun 7 13:17:28.507: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.507: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.540131 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Create stream
I0607 13:17:28.540163 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Stream added, broadcasting: 1
I0607 13:17:28.542182 6 log.go:172] (0xc002b6dad0) Reply frame received for 1
I0607 13:17:28.542251 6 log.go:172] (0xc002b6dad0) (0xc001acb360) Create stream
I0607 13:17:28.542279 6 log.go:172] (0xc002b6dad0) (0xc001acb360) Stream added, broadcasting: 3
I0607 13:17:28.543414 6 log.go:172] (0xc002b6dad0) Reply frame received for 3
I0607 13:17:28.543448 6 log.go:172] (0xc002b6dad0) (0xc002ac9180) Create stream
I0607 13:17:28.543467 6 log.go:172] (0xc002b6dad0) (0xc002ac9180) Stream added, broadcasting: 5
I0607 13:17:28.544489 6 log.go:172] (0xc002b6dad0) Reply frame received for 5
I0607 13:17:28.604105 6 log.go:172] (0xc002b6dad0) Data frame received for 5
I0607 13:17:28.604218 6 log.go:172] (0xc002ac9180) (5) Data frame handling
I0607 13:17:28.604256 6 log.go:172] (0xc002b6dad0) Data frame received for 3
I0607 13:17:28.604285 6 log.go:172] (0xc001acb360) (3) Data frame handling
I0607 13:17:28.604316 6 log.go:172] (0xc001acb360) (3) Data frame sent
I0607 13:17:28.604334 6 log.go:172] (0xc002b6dad0) Data frame received for 3
I0607 13:17:28.604347 6 log.go:172] (0xc001acb360) (3) Data frame handling
I0607 13:17:28.606307 6 log.go:172] (0xc002b6dad0) Data frame received for 1
I0607 13:17:28.606347 6 log.go:172] (0xc002ac90e0) (1) Data frame handling
I0607 13:17:28.606367 6 log.go:172] (0xc002ac90e0) (1) Data frame sent
I0607 13:17:28.606388 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Stream removed, broadcasting: 1
I0607 13:17:28.606415 6 log.go:172] (0xc002b6dad0) Go away received
I0607 13:17:28.606603 6 log.go:172] (0xc002b6dad0) (0xc002ac90e0) Stream removed, broadcasting: 1
I0607 13:17:28.606644 6 log.go:172] (0xc002b6dad0) (0xc001acb360) Stream removed, broadcasting: 3
I0607 13:17:28.606658 6 log.go:172] (0xc002b6dad0) (0xc002ac9180) Stream removed, broadcasting: 5
Jun 7 13:17:28.606: INFO: Exec stderr: ""
Jun 7 13:17:28.606: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.606: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.640543 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Create stream
I0607 13:17:28.640569 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Stream added, broadcasting: 1
I0607 13:17:28.646496 6 log.go:172] (0xc0025ce580) Reply frame received for 1
I0607 13:17:28.646558 6 log.go:172] (0xc0025ce580) (0xc002ac9540) Create stream
I0607 13:17:28.646576 6 log.go:172] (0xc0025ce580) (0xc002ac9540) Stream added, broadcasting: 3
I0607 13:17:28.648214 6 log.go:172] (0xc0025ce580) Reply frame received for 3
I0607 13:17:28.648255 6 log.go:172] (0xc0025ce580) (0xc002ac95e0) Create stream
I0607 13:17:28.648270 6 log.go:172] (0xc0025ce580) (0xc002ac95e0) Stream added, broadcasting: 5
I0607 13:17:28.650035 6 log.go:172] (0xc0025ce580) Reply frame received for 5
I0607 13:17:28.723871 6 log.go:172] (0xc0025ce580) Data frame received for 3
I0607 13:17:28.723921 6 log.go:172] (0xc002ac9540) (3) Data frame handling
I0607 13:17:28.723952 6 log.go:172] (0xc002ac9540) (3) Data frame sent
I0607 13:17:28.724226 6 log.go:172] (0xc0025ce580) Data frame received for 5
I0607 13:17:28.724256 6 log.go:172] (0xc002ac95e0) (5) Data frame handling
I0607 13:17:28.724289 6 log.go:172] (0xc0025ce580) Data frame received for 3
I0607 13:17:28.724307 6 log.go:172] (0xc002ac9540) (3) Data frame handling
I0607 13:17:28.725516 6 log.go:172] (0xc0025ce580) Data frame received for 1
I0607 13:17:28.725542 6 log.go:172] (0xc002ac94a0) (1) Data frame handling
I0607 13:17:28.725579 6 log.go:172] (0xc002ac94a0) (1) Data frame sent
I0607 13:17:28.725742 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Stream removed, broadcasting: 1
I0607 13:17:28.725856 6 log.go:172] (0xc0025ce580) (0xc002ac94a0) Stream removed, broadcasting: 1
I0607 13:17:28.725877 6 log.go:172] (0xc0025ce580) (0xc002ac9540) Stream removed, broadcasting: 3
I0607 13:17:28.725891 6 log.go:172] (0xc0025ce580) (0xc002ac95e0) Stream removed, broadcasting: 5
Jun 7 13:17:28.725: INFO: Exec stderr: ""
I0607 13:17:28.725924 6 log.go:172] (0xc0025ce580) Go away received
Jun 7 13:17:28.725: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.726: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.791115 6 log.go:172] (0xc002ac0210) (0xc001acb680) Create stream
I0607 13:17:28.791147 6 log.go:172] (0xc002ac0210) (0xc001acb680) Stream added, broadcasting: 1
I0607 13:17:28.793749 6 log.go:172] (0xc002ac0210) Reply frame received for 1
I0607 13:17:28.793819 6 log.go:172] (0xc002ac0210) (0xc001d7fe00) Create stream
I0607 13:17:28.793848 6 log.go:172] (0xc002ac0210) (0xc001d7fe00) Stream added, broadcasting: 3
I0607 13:17:28.794989 6 log.go:172] (0xc002ac0210) Reply frame received for 3
I0607 13:17:28.795017 6 log.go:172] (0xc002ac0210) (0xc0025c1860) Create stream
I0607 13:17:28.795026 6 log.go:172] (0xc002ac0210) (0xc0025c1860) Stream added, broadcasting: 5
I0607 13:17:28.796057 6 log.go:172] (0xc002ac0210) Reply frame received for 5
I0607 13:17:28.856508 6 log.go:172] (0xc002ac0210) Data frame received for 5
I0607 13:17:28.856679 6 log.go:172] (0xc0025c1860) (5) Data frame handling
I0607 13:17:28.856765 6 log.go:172] (0xc002ac0210) Data frame received for 3
I0607 13:17:28.856794 6 log.go:172] (0xc001d7fe00) (3) Data frame handling
I0607 13:17:28.856973 6 log.go:172] (0xc001d7fe00) (3) Data frame sent
I0607 13:17:28.856986 6 log.go:172] (0xc002ac0210) Data frame received for 3
I0607 13:17:28.856995 6 log.go:172] (0xc001d7fe00) (3) Data frame handling
I0607 13:17:28.858075 6 log.go:172] (0xc002ac0210) Data frame received for 1
I0607 13:17:28.858095 6 log.go:172] (0xc001acb680) (1) Data frame handling
I0607 13:17:28.858109 6 log.go:172] (0xc001acb680) (1) Data frame sent
I0607 13:17:28.858122 6 log.go:172] (0xc002ac0210) (0xc001acb680) Stream removed, broadcasting: 1
I0607 13:17:28.858131 6 log.go:172] (0xc002ac0210) Go away received
I0607 13:17:28.858216 6 log.go:172] (0xc002ac0210) (0xc001acb680) Stream removed, broadcasting: 1
I0607 13:17:28.858234 6 log.go:172] (0xc002ac0210) (0xc001d7fe00) Stream removed, broadcasting: 3
I0607 13:17:28.858243 6 log.go:172] (0xc002ac0210) (0xc0025c1860) Stream removed, broadcasting: 5
Jun 7 13:17:28.858: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jun 7 13:17:28.858: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.858: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.886098 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Create stream
I0607 13:17:28.886138 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Stream added, broadcasting: 1
I0607 13:17:28.888363 6 log.go:172] (0xc0025cf4a0) Reply frame received for 1
I0607 13:17:28.888397 6 log.go:172] (0xc0025cf4a0) (0xc001d7fea0) Create stream
I0607 13:17:28.888409 6 log.go:172] (0xc0025cf4a0) (0xc001d7fea0) Stream added, broadcasting: 3
I0607 13:17:28.889412 6 log.go:172] (0xc0025cf4a0) Reply frame received for 3
I0607 13:17:28.889447 6 log.go:172] (0xc0025cf4a0) (0xc002ac99a0) Create stream
I0607 13:17:28.889459 6 log.go:172] (0xc0025cf4a0) (0xc002ac99a0) Stream added, broadcasting: 5
I0607 13:17:28.890153 6 log.go:172] (0xc0025cf4a0) Reply frame received for 5
I0607 13:17:28.961808 6 log.go:172] (0xc0025cf4a0) Data frame received for 3
I0607 13:17:28.961844 6 log.go:172] (0xc001d7fea0) (3) Data frame handling
I0607 13:17:28.961865 6 log.go:172] (0xc001d7fea0) (3) Data frame sent
I0607 13:17:28.961879 6 log.go:172] (0xc0025cf4a0) Data frame received for 3
I0607 13:17:28.961891 6 log.go:172] (0xc001d7fea0) (3) Data frame handling
I0607 13:17:28.961947 6 log.go:172] (0xc0025cf4a0) Data frame received for 5
I0607 13:17:28.961966 6 log.go:172] (0xc002ac99a0) (5) Data frame handling
I0607 13:17:28.963450 6 log.go:172] (0xc0025cf4a0) Data frame received for 1
I0607 13:17:28.963473 6 log.go:172] (0xc002ac9900) (1) Data frame handling
I0607 13:17:28.963506 6 log.go:172] (0xc002ac9900) (1) Data frame sent
I0607 13:17:28.963540 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Stream removed, broadcasting: 1
I0607 13:17:28.963561 6 log.go:172] (0xc0025cf4a0) Go away received
I0607 13:17:28.963774 6 log.go:172] (0xc0025cf4a0) (0xc002ac9900) Stream removed, broadcasting: 1
I0607 13:17:28.963797 6 log.go:172] (0xc0025cf4a0) (0xc001d7fea0) Stream removed, broadcasting: 3
I0607 13:17:28.963809 6 log.go:172] (0xc0025cf4a0) (0xc002ac99a0) Stream removed, broadcasting: 5
Jun 7 13:17:28.963: INFO: Exec stderr: ""
Jun 7 13:17:28.963: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:28.963: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:28.996957 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Create stream
I0607 13:17:28.996989 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Stream added, broadcasting: 1
I0607 13:17:28.999423 6 log.go:172] (0xc002dd7970) Reply frame received for 1
I0607 13:17:28.999469 6 log.go:172] (0xc002dd7970) (0xc0025c1900) Create stream
I0607 13:17:28.999485 6 log.go:172] (0xc002dd7970) (0xc0025c1900) Stream added, broadcasting: 3
I0607 13:17:29.000427 6 log.go:172] (0xc002dd7970) Reply frame received for 3
I0607 13:17:29.000453 6 log.go:172] (0xc002dd7970) (0xc0025c19a0) Create stream
I0607 13:17:29.000467 6 log.go:172] (0xc002dd7970) (0xc0025c19a0) Stream added, broadcasting: 5
I0607 13:17:29.001735 6 log.go:172] (0xc002dd7970) Reply frame received for 5
I0607 13:17:29.070071 6 log.go:172] (0xc002dd7970) Data frame received for 5
I0607 13:17:29.070105 6 log.go:172] (0xc0025c19a0) (5) Data frame handling
I0607 13:17:29.070285 6 log.go:172] (0xc002dd7970) Data frame received for 3
I0607 13:17:29.070313 6 log.go:172] (0xc0025c1900) (3) Data frame handling
I0607 13:17:29.070340 6 log.go:172] (0xc0025c1900) (3) Data frame sent
I0607 13:17:29.070356 6 log.go:172] (0xc002dd7970) Data frame received for 3
I0607 13:17:29.070371 6 log.go:172] (0xc0025c1900) (3) Data frame handling
I0607 13:17:29.071531 6 log.go:172] (0xc002dd7970) Data frame received for 1
I0607 13:17:29.071584 6 log.go:172] (0xc001f141e0) (1) Data frame handling
I0607 13:17:29.071622 6 log.go:172] (0xc001f141e0) (1) Data frame sent
I0607 13:17:29.071645 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Stream removed, broadcasting: 1
I0607 13:17:29.071664 6 log.go:172] (0xc002dd7970) Go away received
I0607 13:17:29.071914 6 log.go:172] (0xc002dd7970) (0xc001f141e0) Stream removed, broadcasting: 1
I0607 13:17:29.071946 6 log.go:172] (0xc002dd7970) (0xc0025c1900) Stream removed, broadcasting: 3
I0607 13:17:29.071969 6 log.go:172] (0xc002dd7970) (0xc0025c19a0) Stream removed, broadcasting: 5
Jun 7 13:17:29.071: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jun 7 13:17:29.072: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:29.072: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:29.107427 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Create stream
I0607 13:17:29.107452 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Stream added, broadcasting: 1
I0607 13:17:29.109782 6 log.go:172] (0xc0021de2c0) Reply frame received for 1
I0607 13:17:29.109844 6 log.go:172] (0xc0021de2c0) (0xc001acb860) Create stream
I0607 13:17:29.109910 6 log.go:172] (0xc0021de2c0) (0xc001acb860) Stream added, broadcasting: 3
I0607 13:17:29.110900 6 log.go:172] (0xc0021de2c0) Reply frame received for 3
I0607 13:17:29.110938 6 log.go:172] (0xc0021de2c0) (0xc0017b41e0) Create stream
I0607 13:17:29.110953 6 log.go:172] (0xc0021de2c0) (0xc0017b41e0) Stream added, broadcasting: 5
I0607 13:17:29.111860 6 log.go:172] (0xc0021de2c0) Reply frame received for 5
I0607 13:17:29.188070 6 log.go:172] (0xc0021de2c0) Data frame received for 5
I0607 13:17:29.188098 6 log.go:172] (0xc0017b41e0) (5) Data frame handling
I0607 13:17:29.188131 6 log.go:172] (0xc0021de2c0) Data frame received for 3
I0607 13:17:29.188158 6 log.go:172] (0xc001acb860) (3) Data frame handling
I0607 13:17:29.188182 6 log.go:172] (0xc001acb860) (3) Data frame sent
I0607 13:17:29.188194 6 log.go:172] (0xc0021de2c0) Data frame received for 3
I0607 13:17:29.188206 6 log.go:172] (0xc001acb860) (3) Data frame handling
I0607 13:17:29.189637 6 log.go:172] (0xc0021de2c0) Data frame received for 1
I0607 13:17:29.189664 6 log.go:172] (0xc002ac9cc0) (1) Data frame handling
I0607 13:17:29.189698 6 log.go:172] (0xc002ac9cc0) (1) Data frame sent
I0607 13:17:29.189798 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Stream removed, broadcasting: 1
I0607 13:17:29.189852 6 log.go:172] (0xc0021de2c0) Go away received
I0607 13:17:29.189918 6 log.go:172] (0xc0021de2c0) (0xc002ac9cc0) Stream removed, broadcasting: 1
I0607 13:17:29.189936 6 log.go:172] (0xc0021de2c0) (0xc001acb860) Stream removed, broadcasting: 3
I0607 13:17:29.189949 6 log.go:172] (0xc0021de2c0) (0xc0017b41e0) Stream removed, broadcasting: 5
Jun 7 13:17:29.189: INFO: Exec stderr: ""
Jun 7 13:17:29.189: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:29.190: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:29.224405 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Create stream
I0607 13:17:29.224435 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Stream added, broadcasting: 1
I0607 13:17:29.226919 6 log.go:172] (0xc001d43a20) Reply frame received for 1
I0607 13:17:29.226958 6 log.go:172] (0xc001d43a20) (0xc0017b4820) Create stream
I0607 13:17:29.226974 6 log.go:172] (0xc001d43a20) (0xc0017b4820) Stream added, broadcasting: 3
I0607 13:17:29.227953 6 log.go:172] (0xc001d43a20) Reply frame received for 3
I0607 13:17:29.227982 6 log.go:172] (0xc001d43a20) (0xc0017b4b40) Create stream
I0607 13:17:29.227992 6 log.go:172] (0xc001d43a20) (0xc0017b4b40) Stream added, broadcasting: 5
I0607 13:17:29.228978 6 log.go:172] (0xc001d43a20) Reply frame received for 5
I0607 13:17:29.298718 6 log.go:172] (0xc001d43a20) Data frame received for 5
I0607 13:17:29.298749 6 log.go:172] (0xc0017b4b40) (5) Data frame handling
I0607 13:17:29.298780 6 log.go:172] (0xc001d43a20) Data frame received for 3
I0607 13:17:29.298804 6 log.go:172] (0xc0017b4820) (3) Data frame handling
I0607 13:17:29.298830 6 log.go:172] (0xc0017b4820) (3) Data frame sent
I0607 13:17:29.298850 6 log.go:172] (0xc001d43a20) Data frame received for 3
I0607 13:17:29.298878 6 log.go:172] (0xc0017b4820) (3) Data frame handling
I0607 13:17:29.300230 6 log.go:172] (0xc001d43a20) Data frame received for 1
I0607 13:17:29.300273 6 log.go:172] (0xc0017b4780) (1) Data frame handling
I0607 13:17:29.300298 6 log.go:172] (0xc0017b4780) (1) Data frame sent
I0607 13:17:29.300321 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Stream removed, broadcasting: 1
I0607 13:17:29.300345 6 log.go:172] (0xc001d43a20) Go away received
I0607 13:17:29.300464 6 log.go:172] (0xc001d43a20) (0xc0017b4780) Stream removed, broadcasting: 1
I0607 13:17:29.300488 6 log.go:172] (0xc001d43a20) (0xc0017b4820) Stream removed, broadcasting: 3
I0607 13:17:29.300508 6 log.go:172] (0xc001d43a20) (0xc0017b4b40) Stream removed, broadcasting: 5
Jun 7 13:17:29.300: INFO: Exec stderr: ""
Jun 7 13:17:29.300: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:29.300: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:29.326420 6 log.go:172] (0xc0021ded10) (0xc001218000) Create stream
I0607 13:17:29.326447 6 log.go:172] (0xc0021ded10) (0xc001218000) Stream added, broadcasting: 1
I0607 13:17:29.328682 6 log.go:172] (0xc0021ded10) Reply frame received for 1
I0607 13:17:29.328706 6 log.go:172] (0xc0021ded10) (0xc0025c1a40) Create stream
I0607 13:17:29.328718 6 log.go:172] (0xc0021ded10) (0xc0025c1a40) Stream added, broadcasting: 3
I0607 13:17:29.329766 6 log.go:172] (0xc0021ded10) Reply frame received for 3
I0607 13:17:29.329818 6 log.go:172] (0xc0021ded10) (0xc0025c1ae0) Create stream
I0607 13:17:29.329832 6 log.go:172] (0xc0021ded10) (0xc0025c1ae0) Stream added, broadcasting: 5
I0607 13:17:29.330839 6 log.go:172] (0xc0021ded10) Reply frame received for 5
I0607 13:17:29.398728 6 log.go:172] (0xc0021ded10) Data frame received for 5
I0607 13:17:29.398783 6 log.go:172] (0xc0025c1ae0) (5) Data frame handling
I0607 13:17:29.398811 6 log.go:172] (0xc0021ded10) Data frame received for 3
I0607 13:17:29.398826 6 log.go:172] (0xc0025c1a40) (3) Data frame handling
I0607 13:17:29.398843 6 log.go:172] (0xc0025c1a40) (3) Data frame sent
I0607 13:17:29.398864 6 log.go:172] (0xc0021ded10) Data frame received for 3
I0607 13:17:29.398877 6 log.go:172] (0xc0025c1a40) (3) Data frame handling
I0607 13:17:29.401579 6 log.go:172] (0xc0021ded10) Data frame received for 1
I0607 13:17:29.401617 6 log.go:172] (0xc001218000) (1) Data frame handling
I0607 13:17:29.401664 6 log.go:172] (0xc001218000) (1) Data frame sent
I0607 13:17:29.401692 6 log.go:172] (0xc0021ded10) (0xc001218000) Stream removed, broadcasting: 1
I0607 13:17:29.401723 6 log.go:172] (0xc0021ded10) Go away received
I0607 13:17:29.401927 6 log.go:172] (0xc0021ded10) (0xc001218000) Stream removed, broadcasting: 1
I0607 13:17:29.401963 6 log.go:172] (0xc0021ded10) (0xc0025c1a40) Stream removed, broadcasting: 3
I0607 13:17:29.401991 6 log.go:172] (0xc0021ded10) (0xc0025c1ae0) Stream removed, broadcasting: 5
Jun 7 13:17:29.402: INFO: Exec stderr: ""
Jun 7 13:17:29.402: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6415 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 13:17:29.402: INFO: >>> kubeConfig: /root/.kube/config
I0607 13:17:29.433852 6 log.go:172] (0xc000288630) (0xc001d7e000) Create stream
I0607 13:17:29.433878 6 log.go:172] (0xc000288630) (0xc001d7e000) Stream added, broadcasting: 1
I0607 13:17:29.435805 6 log.go:172] (0xc000288630) Reply frame received for 1
I0607 13:17:29.435840 6 log.go:172] (0xc000288630) (0xc001a8e000) Create stream
I0607 13:17:29.435855 6 log.go:172] (0xc000288630) (0xc001a8e000) Stream added, broadcasting: 3
I0607 13:17:29.436616 6 log.go:172] (0xc000288630) Reply frame received for 3
I0607 13:17:29.436639 6 log.go:172] (0xc000288630) (0xc001a8e0a0) Create stream
I0607 13:17:29.436648 6 log.go:172] (0xc000288630) (0xc001a8e0a0) Stream added, broadcasting: 5
I0607 13:17:29.437690 6 log.go:172] (0xc000288630) Reply frame received for 5
I0607 13:17:29.514648 6 log.go:172] (0xc000288630) Data frame received for 5
I0607 13:17:29.514689 6 log.go:172] (0xc001a8e0a0) (5) Data frame handling
I0607 13:17:29.514720 6 log.go:172] (0xc000288630) Data frame received for 3
I0607 13:17:29.514740 6 log.go:172] (0xc001a8e000) (3) Data frame handling
I0607 13:17:29.514760 6 log.go:172] (0xc001a8e000) (3) Data frame sent
I0607 13:17:29.514780 6 log.go:172] (0xc000288630) Data frame received for 3
I0607 13:17:29.514795 6 log.go:172] (0xc001a8e000) (3) Data frame handling
I0607 13:17:29.516753 6 log.go:172] (0xc000288630) Data frame received for 1
I0607 13:17:29.516782 6 log.go:172] (0xc001d7e000) (1) Data frame handling
I0607 13:17:29.516795 6 log.go:172] (0xc001d7e000) (1) Data frame sent
I0607 13:17:29.516819 6 log.go:172] (0xc000288630) (0xc001d7e000) Stream removed, broadcasting: 1
I0607 13:17:29.516850 6 log.go:172] (0xc000288630) Go away received
I0607 13:17:29.516971 6 log.go:172] (0xc000288630) (0xc001d7e000) Stream removed, broadcasting: 1
I0607 13:17:29.516995 6 log.go:172] (0xc000288630) (0xc001a8e000) Stream removed, broadcasting: 3
I0607 13:17:29.517005 6 log.go:172] (0xc000288630) (0xc001a8e0a0) Stream removed, broadcasting: 5
Jun 7 13:17:29.517: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:17:29.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6415" for this suite.
Jun 7 13:18:09.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:09.603: INFO: namespace e2e-kubelet-etc-hosts-6415 deletion completed in 40.082276319s
• [SLOW TEST:51.328 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
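The KubeletManagedEtcHosts test above exercises three cases: containers in an ordinary pod get a kubelet-managed /etc/hosts, a container that explicitly mounts its own /etc/hosts is left alone, and a hostNetwork=true pod keeps the node's /etc/hosts untouched. A minimal sketch of the kind of pod spec involved (names and images are illustrative, not the exact manifests the suite generates):

```yaml
# Illustrative sketch only -- not the manifest generated by the e2e suite.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  hostNetwork: false          # kubelet manages /etc/hosts for these containers
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-file
      mountPath: /etc/hosts   # explicit /etc/hosts mount: kubelet does not manage this one
  volumes:
  - name: hosts-file
    hostPath:
      path: /etc/hosts
```

In the log, the repeated `cat /etc/hosts` / `cat /etc/hosts-original` execs compare each container's view of the file against the original to decide whether the kubelet rewrote it.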
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:09.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jun 7 13:18:14.748: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:18:15.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1951" for this suite.
Jun 7 13:18:37.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:37.920: INFO: namespace replicaset-1951 deletion completed in 22.097117762s
• [SLOW TEST:28.317 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
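The ReplicaSet test above relies on controller adoption semantics: a ReplicaSet whose selector matches an existing orphan pod takes ownership of it (adding an ownerReference), and changing the pod's label so it no longer matches causes the controller to release it and create a replacement. A sketch of the matching selector, with illustrative names (the suite builds these objects programmatically):

```yaml
# Illustrative sketch -- not the exact object the suite creates.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the pre-existing orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
```

Overwriting the adopted pod's `name` label to any non-matching value is enough to trigger the release step the log records.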
SSSS
------------------------------
[sig-auth] ServiceAccounts
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:37.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jun 7 13:18:42.599: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3190 pod-service-account-8c61b0b8-b534-4946-b3be-e0b1dc349fbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jun 7 13:18:45.867: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3190 pod-service-account-8c61b0b8-b534-4946-b3be-e0b1dc349fbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jun 7 13:18:46.071: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3190 pod-service-account-8c61b0b8-b534-4946-b3be-e0b1dc349fbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:18:46.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3190" for this suite.
Jun 7 13:18:52.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:52.380: INFO: namespace svcaccounts-3190 deletion completed in 6.103410227s
• [SLOW TEST:14.460 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
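The three `kubectl exec ... cat` commands in the ServiceAccounts test above read the files that the kubelet auto-mounts into every container using a service account. A hedged sketch of a pod that would expose the same files (pod name and image are illustrative):

```yaml
# Illustrative sketch; the suite generates its own pod name and image.
apiVersion: v1
kind: Pod
metadata:
  name: svc-account-demo
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
# With automountServiceAccountToken enabled (the default), the container sees:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
#   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#   /var/run/secrets/kubernetes.io/serviceaccount/namespace
```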
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:52.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jun 7 13:18:52.435: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:18:52.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7459" for this suite.
Jun 7 13:18:58.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:18:58.625: INFO: namespace kubectl-7459 deletion completed in 6.093451624s
• [SLOW TEST:6.244 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:18:58.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c9b4eeb7-1b68-4d26-b85b-b4a199841150
STEP: Creating a pod to test consume secrets
Jun 7 13:18:58.724: INFO: Waiting up to 5m0s for pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532" in namespace "secrets-6504" to be "success or failure"
Jun 7 13:18:58.726: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517713ms
Jun 7 13:19:00.805: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081474213s
Jun 7 13:19:02.809: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085534832s
STEP: Saw pod success
Jun 7 13:19:02.809: INFO: Pod "pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532" satisfied condition "success or failure"
Jun 7 13:19:02.812: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532 container secret-volume-test:
STEP: delete the pod
Jun 7 13:19:02.854: INFO: Waiting for pod pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532 to disappear
Jun 7 13:19:02.858: INFO: Pod pod-secrets-8ef517ce-ddc3-486a-9a4c-bfda5bfcc532 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:19:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6504" for this suite.
Jun 7 13:19:08.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:19:08.954: INFO: namespace secrets-6504 deletion completed in 6.092079417s
• [SLOW TEST:10.329 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
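The Secrets test above mounts a secret into a pod using an `items` mapping, which exposes a secret key under a different file path inside the volume. A minimal sketch of that kind of manifest (the names, image, and key are illustrative assumptions, not the suite's exact fixtures):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map          # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox               # illustrative image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1    # the "mapping": key appears under this path
```

As the log shows, the test waits up to 5m for the pod to satisfy "success or failure" (terminal phase `Succeeded`), then reads the container's log to verify the mapped file's contents before deleting the pod.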
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:19:08.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-dfe731fd-9d31-4c32-81bf-7bac0a3f0c76 in namespace container-probe-8992
Jun 7 13:19:13.070: INFO: Started pod test-webserver-dfe731fd-9d31-4c32-81bf-7bac0a3f0c76 in namespace container-probe-8992
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:19:13.074: INFO: Initial restart count of pod test-webserver-dfe731fd-9d31-4c32-81bf-7bac0a3f0c76 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:23:13.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8992" for this suite.
Jun 7 13:23:19.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:23:19.964: INFO: namespace container-probe-8992 deletion completed in 6.113615575s
• [SLOW TEST:251.011 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
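The probing test above creates a web-server pod with an HTTP liveness probe against `/healthz` and then watches for roughly four minutes (13:19 to 13:23 in the log) to confirm the restart count stays at 0. A sketch of the probe configuration being exercised, with illustrative image and timing values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # illustrative; any server answering /healthz works
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5           # assumed values, not the suite's exact settings
      periodSeconds: 5
      failureThreshold: 3
```

Because the endpoint keeps returning 2xx, the kubelet never kills the container, so `status.containerStatuses[0].restartCount` remains 0 for the duration of the observation window.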
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:23:19.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:23:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2821" for this suite.
Jun 7 13:23:30.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:23:30.133: INFO: namespace kubelet-test-2821 deletion completed in 6.098969115s
• [SLOW TEST:10.168 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
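The Kubelet test above ("should have an terminated reason") schedules a container whose command always fails and checks that the kubelet records a terminated state with a reason in the container status. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox           # illustrative image
    command: ["/bin/false"]  # always exits non-zero
```

After the container exits, `kubectl get pod bin-false -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'` should report a non-empty reason such as `Error`, which is what the assertion verifies.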
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:23:30.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-bec98581-a549-406e-8886-5e3b10b241fe in namespace container-probe-7425
Jun 7 13:23:34.251: INFO: Started pod busybox-bec98581-a549-406e-8886-5e3b10b241fe in namespace container-probe-7425
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:23:34.255: INFO: Initial restart count of pod busybox-bec98581-a549-406e-8886-5e3b10b241fe is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:27:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7425" for this suite.
Jun 7 13:27:41.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:27:41.897: INFO: namespace container-probe-7425 deletion completed in 6.290522068s
• [SLOW TEST:251.764 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
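This probe test is the exec-probe counterpart of the earlier HTTP one: the container creates `/tmp/health` and keeps it in place, so `cat /tmp/health` succeeds on every probe and the pod is never restarted over the ~4-minute watch (13:23 to 13:27 in the log). A sketch with assumed timing values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exit 0 => probe passes
      initialDelaySeconds: 5              # illustrative values
      periodSeconds: 5
```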
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:27:41.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0d92a951-b044-4829-a0d6-4742869814d2 in namespace container-probe-7092
Jun 7 13:27:48.061: INFO: Started pod liveness-0d92a951-b044-4829-a0d6-4742869814d2 in namespace container-probe-7092
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 13:27:48.064: INFO: Initial restart count of pod liveness-0d92a951-b044-4829-a0d6-4742869814d2 is 0
Jun 7 13:28:10.447: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 1 (22.383703652s elapsed)
Jun 7 13:28:28.522: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 2 (40.458100017s elapsed)
Jun 7 13:28:50.646: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 3 (1m2.581926673s elapsed)
Jun 7 13:29:08.732: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 4 (1m20.667882854s elapsed)
Jun 7 13:30:11.314: INFO: Restart count of pod container-probe-7092/liveness-0d92a951-b044-4829-a0d6-4742869814d2 is now 5 (2m23.250619437s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:30:11.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7092" for this suite.
Jun 7 13:30:17.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:30:17.513: INFO: namespace container-probe-7092 deletion completed in 6.130736515s
• [SLOW TEST:155.615 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
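The monotonic-restart-count test is the failing variant of the exec probe: the container deletes its health file shortly after starting, so the probe begins failing and the kubelet restarts the container repeatedly. A sketch of the shape of that fixture (values are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
```

The log's restart timestamps illustrate crash-loop backoff: the first four restarts arrive roughly every 18-22 seconds, while the gap before restart 5 stretches to over a minute as the kubelet backs off exponentially. The test only asserts that the count increases monotonically, not the spacing.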
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:30:17.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-41ad1cc4-df0a-4196-ae80-583a2f1d571f
STEP: Creating configMap with name cm-test-opt-upd-15a36a1c-4c22-4d91-91f0-e266eec46df3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-41ad1cc4-df0a-4196-ae80-583a2f1d571f
STEP: Updating configmap cm-test-opt-upd-15a36a1c-4c22-4d91-91f0-e266eec46df3
STEP: Creating configMap with name cm-test-opt-create-311dd219-5a30-416e-b5a3-e06b45c260e2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:31:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7006" for this suite.
Jun 7 13:32:00.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:32:00.635: INFO: namespace configmap-7006 deletion completed in 24.161527619s
• [SLOW TEST:103.122 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
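The optional-ConfigMap test mounts volumes that reference ConfigMaps marked `optional: true`, then deletes one ConfigMap, updates another, and creates a third that the pod already referenced, verifying each change is eventually reflected in the volume. A sketch of one such volume entry (names are illustrative):

```yaml
volumes:
- name: cm-volume-del
  configMap:
    name: cm-test-opt-del     # deleted mid-test; optional => pod keeps running
    optional: true
- name: cm-volume-create
  configMap:
    name: cm-test-opt-create  # created only after the pod starts
    optional: true
```

With `optional: true`, a missing ConfigMap produces an empty volume instead of blocking pod startup, which is why the pod can reference `cm-test-opt-create` before it exists; the kubelet's sync loop later projects the new data in, which the test observes (the "waiting to observe update in volume" step).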
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:32:00.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 7 13:32:00.885: INFO: Waiting up to 5m0s for pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d" in namespace "emptydir-5407" to be "success or failure"
Jun 7 13:32:00.912: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.233616ms
Jun 7 13:32:02.917: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031332447s
Jun 7 13:32:04.921: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036010665s
Jun 7 13:32:06.964: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079176353s
STEP: Saw pod success
Jun 7 13:32:06.964: INFO: Pod "pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d" satisfied condition "success or failure"
Jun 7 13:32:06.968: INFO: Trying to get logs from node iruya-worker pod pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d container test-container:
STEP: delete the pod
Jun 7 13:32:07.016: INFO: Waiting for pod pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d to disappear
Jun 7 13:32:07.026: INFO: Pod pod-07ad280e-32fa-40ed-bbe5-ef5597b0515d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:32:07.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5407" for this suite.
Jun 7 13:32:13.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:32:13.190: INFO: namespace emptydir-5407 deletion completed in 6.161175439s
• [SLOW TEST:12.554 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
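The EmptyDir test name encodes its parameters: run as a non-root user, create a file with mode 0644, on the default medium (node disk rather than tmpfs). A minimal sketch of that combination (user ID, paths, and image are illustrative assumptions; the suite uses a dedicated mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                     # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c",
      "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                        # default medium; use `medium: Memory` for tmpfs
```

The test then asserts on the container's log output (file contents, permissions, and ownership) after the pod reaches `Succeeded`, following the same "success or failure" wait pattern seen throughout this log.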
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:32:13.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:32:13.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d" in namespace "downward-api-4527" to be "success or failure"
Jun 7 13:32:13.371: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.090916ms
Jun 7 13:32:15.374: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048656596s
Jun 7 13:32:17.378: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052571316s
Jun 7 13:32:19.383: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057138036s
STEP: Saw pod success
Jun 7 13:32:19.383: INFO: Pod "downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d" satisfied condition "success or failure"
Jun 7 13:32:19.386: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d container client-container:
STEP: delete the pod
Jun 7 13:32:19.531: INFO: Waiting for pod downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d to disappear
Jun 7 13:32:19.695: INFO: Pod downwardapi-volume-2470ff2d-9c6b-4dd8-9b93-2e07e4f5c21d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:32:19.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4527" for this suite.
Jun 7 13:32:25.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:32:25.803: INFO: namespace downward-api-4527 deletion completed in 6.104317073s
• [SLOW TEST:12.613 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
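The Downward API volume test exposes the container's own memory request as a file via `resourceFieldRef`, then reads it back from the container log. A sketch, with an assumed request size:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"        # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

With the default divisor of 1, the file contains the request in bytes (33554432 for 32Mi), which is what the test compares against the expected value in the pod's log.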
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:32:25.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6139586d-a14c-41d7-ad50-f87568471026
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-6139586d-a14c-41d7-ad50-f87568471026
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:33:42.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4011" for this suite.
Jun 7 13:34:06.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:34:06.113: INFO: namespace projected-4011 deletion completed in 24.108701582s
• [SLOW TEST:100.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:34:06.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3886
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3886
STEP: Creating statefulset with conflicting port in namespace statefulset-3886
STEP: Waiting until pod test-pod will start running in namespace statefulset-3886
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3886
Jun 7 13:34:12.465: INFO: Observed stateful pod in namespace: statefulset-3886, name: ss-0, uid: 644b33f7-8daa-42d1-89b8-09caa87fb653, status phase: Pending. Waiting for statefulset controller to delete.
Jun 7 13:34:12.575: INFO: Observed stateful pod in namespace: statefulset-3886, name: ss-0, uid: 644b33f7-8daa-42d1-89b8-09caa87fb653, status phase: Failed. Waiting for statefulset controller to delete.
Jun 7 13:34:12.592: INFO: Observed stateful pod in namespace: statefulset-3886, name: ss-0, uid: 644b33f7-8daa-42d1-89b8-09caa87fb653, status phase: Failed. Waiting for statefulset controller to delete.
Jun 7 13:34:12.670: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3886
STEP: Removing pod with conflicting port in namespace statefulset-3886
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3886 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 7 13:34:18.864: INFO: Deleting all statefulset in ns statefulset-3886
Jun 7 13:34:18.867: INFO: Scaling statefulset ss to 0
Jun 7 13:34:28.898: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:34:28.900: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:34:29.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3886" for this suite.
Jun 7 13:34:37.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:34:37.251: INFO: namespace statefulset-3886 deletion completed in 8.231370696s
• [SLOW TEST:31.138 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
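The StatefulSet eviction test works by deliberately creating a port conflict: a plain pod grabs a `hostPort` on a node, then a StatefulSet pinned to the same node tries to start `ss-0` with the same `hostPort`, so `ss-0` goes to `Failed` and the controller deletes and recreates it (visible in the log at 13:34:12). Once the conflicting pod is removed, `ss-0` schedules and runs. A sketch of the StatefulSet side (port, node name, and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: iruya-worker      # pinned to the node holding the conflicting pod
      containers:
      - name: webserver
        image: nginx              # illustrative image
        ports:
        - containerPort: 80
          hostPort: 21017         # hypothetical port; same hostPort as the pre-created pod
```

The assertion is that the controller keeps recreating the failed pod rather than giving up, which is the "recreate evicted statefulset" behavior the conformance test names.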
SS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:34:37.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-dc3c4960-0091-4a0b-94c4-0c8f0a28e682
STEP: Creating a pod to test consume configMaps
Jun 7 13:34:37.989: INFO: Waiting up to 5m0s for pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b" in namespace "configmap-8622" to be "success or failure"
Jun 7 13:34:37.992: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.900428ms
Jun 7 13:34:40.029: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039482652s
Jun 7 13:34:42.188: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198840671s
Jun 7 13:34:44.192: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202467039s
STEP: Saw pod success
Jun 7 13:34:44.192: INFO: Pod "pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b" satisfied condition "success or failure"
Jun 7 13:34:44.195: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b container configmap-volume-test:
STEP: delete the pod
Jun 7 13:34:44.245: INFO: Waiting for pod pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b to disappear
Jun 7 13:34:44.268: INFO: Pod pod-configmaps-5234208e-a5b8-44ae-a188-5fcc33c4fc4b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:34:44.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8622" for this suite.
Jun 7 13:34:50.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:34:50.445: INFO: namespace configmap-8622 deletion completed in 6.172643069s
• [SLOW TEST:13.193 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:34:50.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:34:50.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6" in namespace "projected-3091" to be "success or failure"
Jun 7 13:34:50.599: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.01838ms
Jun 7 13:34:52.604: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02741891s
Jun 7 13:34:54.624: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047688217s
Jun 7 13:34:56.628: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051974473s
STEP: Saw pod success
Jun 7 13:34:56.628: INFO: Pod "downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6" satisfied condition "success or failure"
Jun 7 13:34:56.632: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6 container client-container:
STEP: delete the pod
Jun 7 13:34:56.917: INFO: Waiting for pod downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6 to disappear
Jun 7 13:34:56.929: INFO: Pod downwardapi-volume-a389c160-7076-4d31-8f8e-35c5dda216a6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:34:56.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3091" for this suite.
Jun 7 13:35:03.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:35:03.218: INFO: namespace projected-3091 deletion completed in 6.135923121s
• [SLOW TEST:12.773 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:35:03.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jun 7 13:35:03.357: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jun 7 13:35:04.494: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jun 7 13:35:07.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:35:09.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:35:11.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727133704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 13:35:14.080: INFO: Waited 632.862701ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:35:15.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4358" for this suite.
Jun 7 13:35:21.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:35:21.528: INFO: namespace aggregator-4358 deletion completed in 6.298890554s
• [SLOW TEST:18.310 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:35:21.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 7 13:35:21.654: INFO: Waiting up to 5m0s for pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed" in namespace "emptydir-7968" to be "success or failure"
Jun 7 13:35:21.664: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.749691ms
Jun 7 13:35:23.763: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109136269s
Jun 7 13:35:25.767: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113147877s
Jun 7 13:35:27.771: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116957099s
STEP: Saw pod success
Jun 7 13:35:27.771: INFO: Pod "pod-525f3694-175e-44ee-a0b7-abf62b757bed" satisfied condition "success or failure"
Jun 7 13:35:27.774: INFO: Trying to get logs from node iruya-worker pod pod-525f3694-175e-44ee-a0b7-abf62b757bed container test-container:
STEP: delete the pod
Jun 7 13:35:27.849: INFO: Waiting for pod pod-525f3694-175e-44ee-a0b7-abf62b757bed to disappear
Jun 7 13:35:27.912: INFO: Pod pod-525f3694-175e-44ee-a0b7-abf62b757bed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:35:27.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7968" for this suite.
Jun 7 13:35:36.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:35:36.165: INFO: namespace emptydir-7968 deletion completed in 8.248455123s
• [SLOW TEST:14.636 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:35:36.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 7 13:35:36.394: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155040,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 7 13:35:36.394: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155040,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 7 13:35:46.408: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155060,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 7 13:35:46.408: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155060,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 7 13:35:56.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155079,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 7 13:35:56.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155079,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 7 13:36:06.423: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155099,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 7 13:36:06.424: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-a,UID:d3214c4b-a280-4737-acd4-d91932494983,ResourceVersion:15155099,Generation:0,CreationTimestamp:2020-06-07 13:35:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 7 13:36:16.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155119,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 7 13:36:16.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155119,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jun 7 13:36:26.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155140,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 7 13:36:26.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-748,SelfLink:/api/v1/namespaces/watch-748/configmaps/e2e-watch-test-configmap-b,UID:6ee129d0-cc26-4b52-a637-3bf7f7d0f711,ResourceVersion:15155140,Generation:0,CreationTimestamp:2020-06-07 13:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:36:36.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-748" for this suite.
Jun 7 13:36:42.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:36:42.581: INFO: namespace watch-748 deletion completed in 6.137619415s
• [SLOW TEST:66.415 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:36:42.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 13:36:43.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3591'
Jun 7 13:36:46.790: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 7 13:36:46.790: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jun 7 13:36:46.803: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jun 7 13:36:46.878: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jun 7 13:36:46.886: INFO: scanned /root for discovery docs:
Jun 7 13:36:46.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3591'
Jun 7 13:37:03.127: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jun 7 13:37:03.127: INFO: stdout: "Created e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb\nScaling up e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jun 7 13:37:03.127: INFO: stdout: "Created e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb\nScaling up e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jun 7 13:37:03.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3591'
Jun 7 13:37:03.216: INFO: stderr: ""
Jun 7 13:37:03.216: INFO: stdout: "e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 "
Jun 7 13:37:03.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3591'
Jun 7 13:37:03.382: INFO: stderr: ""
Jun 7 13:37:03.382: INFO: stdout: "true"
Jun 7 13:37:03.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3591'
Jun 7 13:37:03.480: INFO: stderr: ""
Jun 7 13:37:03.480: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jun 7 13:37:03.480: INFO: e2e-test-nginx-rc-ced83a70a73340b28c813c3dc3af33eb-9qlf4 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jun 7 13:37:03.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3591'
Jun 7 13:37:03.620: INFO: stderr: ""
Jun 7 13:37:03.621: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:37:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3591" for this suite.
Jun 7 13:37:27.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:37:27.957: INFO: namespace kubectl-3591 deletion completed in 24.284924498s
• [SLOW TEST:45.376 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:37:27.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9899
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9899
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9899
Jun 7 13:37:28.142: INFO: Found 0 stateful pods, waiting for 1
Jun 7 13:37:38.149: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jun 7 13:37:38.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:37:38.719: INFO: stderr: "I0607 13:37:38.282983 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Create stream\nI0607 13:37:38.283036 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Stream added, broadcasting: 1\nI0607 13:37:38.285501 729 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0607 13:37:38.285552 729 log.go:172] (0xc000116dc0) (0xc0006e6000) Create stream\nI0607 13:37:38.285576 729 log.go:172] (0xc000116dc0) (0xc0006e6000) Stream added, broadcasting: 3\nI0607 13:37:38.286691 729 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0607 13:37:38.286738 729 log.go:172] (0xc000116dc0) (0xc0001fc8c0) Create stream\nI0607 13:37:38.286771 729 log.go:172] (0xc000116dc0) (0xc0001fc8c0) Stream added, broadcasting: 5\nI0607 13:37:38.287742 729 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0607 13:37:38.374092 729 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:37:38.374124 729 log.go:172] (0xc0001fc8c0) (5) Data frame handling\nI0607 13:37:38.374144 729 log.go:172] (0xc0001fc8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:37:38.708943 729 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 13:37:38.709001 729 log.go:172] (0xc0006e6000) (3) Data frame handling\nI0607 13:37:38.709028 729 log.go:172] (0xc0006e6000) (3) Data frame sent\nI0607 13:37:38.709047 729 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 13:37:38.709064 729 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:37:38.709078 729 log.go:172] (0xc0001fc8c0) (5) Data frame handling\nI0607 13:37:38.709269 729 log.go:172] (0xc0006e6000) (3) Data frame handling\nI0607 13:37:38.711345 729 log.go:172] (0xc000116dc0) Data frame received for 1\nI0607 13:37:38.711366 729 log.go:172] (0xc0001fc820) (1) Data frame handling\nI0607 13:37:38.711384 729 log.go:172] (0xc0001fc820) (1) Data frame sent\nI0607 13:37:38.711398 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Stream removed, broadcasting: 1\nI0607 13:37:38.711509 729 log.go:172] (0xc000116dc0) Go away received\nI0607 13:37:38.711669 729 log.go:172] (0xc000116dc0) (0xc0001fc820) Stream removed, broadcasting: 1\nI0607 13:37:38.711685 729 log.go:172] (0xc000116dc0) (0xc0006e6000) Stream removed, broadcasting: 3\nI0607 13:37:38.711694 729 log.go:172] (0xc000116dc0) (0xc0001fc8c0) Stream removed, broadcasting: 5\n"
Jun 7 13:37:38.719: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:37:38.719: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:37:38.723: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 7 13:37:48.742: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:37:48.742: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:37:48.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999385s
Jun 7 13:37:49.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.966490487s
Jun 7 13:37:50.796: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.961905394s
Jun 7 13:37:51.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.956968059s
Jun 7 13:37:52.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951345836s
Jun 7 13:37:53.811: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.947498855s
Jun 7 13:37:54.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.942674688s
Jun 7 13:37:55.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.93828567s
Jun 7 13:37:56.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.933892142s
Jun 7 13:37:57.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 929.930262ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9899
Jun 7 13:37:58.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:37:59.594: INFO: stderr: "I0607 13:37:59.482521 751 log.go:172] (0xc000a12420) (0xc0007768c0) Create stream\nI0607 13:37:59.482588 751 log.go:172] (0xc000a12420) (0xc0007768c0) Stream added, broadcasting: 1\nI0607 13:37:59.486138 751 log.go:172] (0xc000a12420) Reply frame received for 1\nI0607 13:37:59.486172 751 log.go:172] (0xc000a12420) (0xc000612320) Create stream\nI0607 13:37:59.486182 751 log.go:172] (0xc000a12420) (0xc000612320) Stream added, broadcasting: 3\nI0607 13:37:59.487080 751 log.go:172] (0xc000a12420) Reply frame received for 3\nI0607 13:37:59.487131 751 log.go:172] (0xc000a12420) (0xc000776000) Create stream\nI0607 13:37:59.487155 751 log.go:172] (0xc000a12420) (0xc000776000) Stream added, broadcasting: 5\nI0607 13:37:59.488043 751 log.go:172] (0xc000a12420) Reply frame received for 5\nI0607 13:37:59.586799 751 log.go:172] (0xc000a12420) Data frame received for 3\nI0607 13:37:59.586842 751 log.go:172] (0xc000612320) (3) Data frame handling\nI0607 13:37:59.586855 751 log.go:172] (0xc000612320) (3) Data frame sent\nI0607 13:37:59.586868 751 log.go:172] (0xc000a12420) Data frame received for 3\nI0607 13:37:59.586882 751 log.go:172] (0xc000612320) (3) Data frame handling\nI0607 13:37:59.586920 751 log.go:172] (0xc000a12420) Data frame received for 5\nI0607 13:37:59.586945 751 log.go:172] (0xc000776000) (5) Data frame handling\nI0607 13:37:59.586971 751 log.go:172] (0xc000776000) (5) Data frame sent\nI0607 13:37:59.586988 751 log.go:172] (0xc000a12420) Data frame received for 5\nI0607 13:37:59.587015 751 log.go:172] (0xc000776000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:37:59.588373 751 log.go:172] (0xc000a12420) Data frame received for 1\nI0607 13:37:59.588418 751 log.go:172] (0xc0007768c0) (1) Data frame handling\nI0607 13:37:59.588448 751 log.go:172] (0xc0007768c0) (1) Data frame sent\nI0607 13:37:59.588656 751 log.go:172] (0xc000a12420) (0xc0007768c0) Stream removed, broadcasting: 1\nI0607 13:37:59.589052 751 log.go:172] (0xc000a12420) (0xc0007768c0) Stream removed, broadcasting: 1\nI0607 13:37:59.589075 751 log.go:172] (0xc000a12420) (0xc000612320) Stream removed, broadcasting: 3\nI0607 13:37:59.589084 751 log.go:172] (0xc000a12420) (0xc000776000) Stream removed, broadcasting: 5\n"
Jun 7 13:37:59.594: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:37:59.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:37:59.598: INFO: Found 1 stateful pods, waiting for 3
Jun 7 13:38:09.602: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:38:09.602: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:38:09.602: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jun 7 13:38:19.604: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:38:19.604: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:38:19.604: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jun 7 13:38:19.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:38:19.830: INFO: stderr: "I0607 13:38:19.734258 770 log.go:172] (0xc0009280b0) (0xc00090a640) Create stream\nI0607 13:38:19.734309 770 log.go:172] (0xc0009280b0) (0xc00090a640) Stream added, broadcasting: 1\nI0607 13:38:19.736729 770 log.go:172] (0xc0009280b0) Reply frame received for 1\nI0607 13:38:19.736763 770 log.go:172] (0xc0009280b0) (0xc000998000) Create stream\nI0607 13:38:19.736773 770 log.go:172] (0xc0009280b0) (0xc000998000) Stream added, broadcasting: 3\nI0607 13:38:19.737771 770 log.go:172] (0xc0009280b0) Reply frame received for 3\nI0607 13:38:19.737794 770 log.go:172] (0xc0009280b0) (0xc000626280) Create stream\nI0607 13:38:19.737802 770 log.go:172] (0xc0009280b0) (0xc000626280) Stream added, broadcasting: 5\nI0607 13:38:19.738643 770 log.go:172] (0xc0009280b0) Reply frame received for 5\nI0607 13:38:19.821840 770 log.go:172] (0xc0009280b0) Data frame received for 3\nI0607 13:38:19.821874 770 log.go:172] (0xc000998000) (3) Data frame handling\nI0607 13:38:19.821886 770 log.go:172] (0xc000998000) (3) Data frame sent\nI0607 13:38:19.821909 770 log.go:172] (0xc0009280b0) Data frame received for 5\nI0607 13:38:19.821917 770 log.go:172] (0xc000626280) (5) Data frame handling\nI0607 13:38:19.821925 770 log.go:172] (0xc000626280) (5) Data frame sent\nI0607 13:38:19.821933 770 log.go:172] (0xc0009280b0) Data frame received for 5\nI0607 13:38:19.821939 770 log.go:172] (0xc000626280) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:38:19.822070 770 log.go:172] (0xc0009280b0) Data frame received for 3\nI0607 13:38:19.822093 770 log.go:172] (0xc000998000) (3) Data frame handling\nI0607 13:38:19.823654 770 log.go:172] (0xc0009280b0) Data frame received for 1\nI0607 13:38:19.823679 770 log.go:172] (0xc00090a640) (1) Data frame handling\nI0607 13:38:19.823694 770 log.go:172] (0xc00090a640) (1) Data frame sent\nI0607 13:38:19.823723 770 log.go:172] (0xc0009280b0) (0xc00090a640) Stream removed, broadcasting: 1\nI0607 
13:38:19.823754 770 log.go:172] (0xc0009280b0) Go away received\nI0607 13:38:19.824069 770 log.go:172] (0xc0009280b0) (0xc00090a640) Stream removed, broadcasting: 1\nI0607 13:38:19.824085 770 log.go:172] (0xc0009280b0) (0xc000998000) Stream removed, broadcasting: 3\nI0607 13:38:19.824096 770 log.go:172] (0xc0009280b0) (0xc000626280) Stream removed, broadcasting: 5\n"
Jun 7 13:38:19.831: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:38:19.831: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:38:19.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:38:20.121: INFO: stderr: "I0607 13:38:19.956386 789 log.go:172] (0xc00096c420) (0xc000596820) Create stream\nI0607 13:38:19.956459 789 log.go:172] (0xc00096c420) (0xc000596820) Stream added, broadcasting: 1\nI0607 13:38:19.961457 789 log.go:172] (0xc00096c420) Reply frame received for 1\nI0607 13:38:19.961492 789 log.go:172] (0xc00096c420) (0xc0003021e0) Create stream\nI0607 13:38:19.961502 789 log.go:172] (0xc00096c420) (0xc0003021e0) Stream added, broadcasting: 3\nI0607 13:38:19.962339 789 log.go:172] (0xc00096c420) Reply frame received for 3\nI0607 13:38:19.962369 789 log.go:172] (0xc00096c420) (0xc000596000) Create stream\nI0607 13:38:19.962379 789 log.go:172] (0xc00096c420) (0xc000596000) Stream added, broadcasting: 5\nI0607 13:38:19.963054 789 log.go:172] (0xc00096c420) Reply frame received for 5\nI0607 13:38:20.075796 789 log.go:172] (0xc00096c420) Data frame received for 5\nI0607 13:38:20.075836 789 log.go:172] (0xc000596000) (5) Data frame handling\nI0607 13:38:20.075856 789 log.go:172] (0xc000596000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:38:20.110989 789 log.go:172] (0xc00096c420) Data frame received for 3\nI0607 13:38:20.111008 789 log.go:172] (0xc0003021e0) (3) Data frame handling\nI0607 13:38:20.111025 789 log.go:172] (0xc0003021e0) (3) Data frame sent\nI0607 13:38:20.111221 789 log.go:172] (0xc00096c420) Data frame received for 3\nI0607 13:38:20.111244 789 log.go:172] (0xc0003021e0) (3) Data frame handling\nI0607 13:38:20.111497 789 log.go:172] (0xc00096c420) Data frame received for 5\nI0607 13:38:20.111517 789 log.go:172] (0xc000596000) (5) Data frame handling\nI0607 13:38:20.114107 789 log.go:172] (0xc00096c420) Data frame received for 1\nI0607 13:38:20.114128 789 log.go:172] (0xc000596820) (1) Data frame handling\nI0607 13:38:20.114138 789 log.go:172] (0xc000596820) (1) Data frame sent\nI0607 13:38:20.114150 789 log.go:172] (0xc00096c420) (0xc000596820) Stream removed, broadcasting: 1\nI0607 
13:38:20.114200 789 log.go:172] (0xc00096c420) Go away received\nI0607 13:38:20.114446 789 log.go:172] (0xc00096c420) (0xc000596820) Stream removed, broadcasting: 1\nI0607 13:38:20.114462 789 log.go:172] (0xc00096c420) (0xc0003021e0) Stream removed, broadcasting: 3\nI0607 13:38:20.114471 789 log.go:172] (0xc00096c420) (0xc000596000) Stream removed, broadcasting: 5\n"
Jun 7 13:38:20.121: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:38:20.121: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:38:20.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:38:20.355: INFO: stderr: "I0607 13:38:20.243735 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Create stream\nI0607 13:38:20.243791 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Stream added, broadcasting: 1\nI0607 13:38:20.247789 810 log.go:172] (0xc000a36210) Reply frame received for 1\nI0607 13:38:20.247835 810 log.go:172] (0xc000a36210) (0xc00088a000) Create stream\nI0607 13:38:20.247846 810 log.go:172] (0xc000a36210) (0xc00088a000) Stream added, broadcasting: 3\nI0607 13:38:20.248749 810 log.go:172] (0xc000a36210) Reply frame received for 3\nI0607 13:38:20.248776 810 log.go:172] (0xc000a36210) (0xc00088a0a0) Create stream\nI0607 13:38:20.248785 810 log.go:172] (0xc000a36210) (0xc00088a0a0) Stream added, broadcasting: 5\nI0607 13:38:20.249507 810 log.go:172] (0xc000a36210) Reply frame received for 5\nI0607 13:38:20.300813 810 log.go:172] (0xc000a36210) Data frame received for 5\nI0607 13:38:20.300840 810 log.go:172] (0xc00088a0a0) (5) Data frame handling\nI0607 13:38:20.300859 810 log.go:172] (0xc00088a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:38:20.346664 810 log.go:172] (0xc000a36210) Data frame received for 3\nI0607 13:38:20.346710 810 log.go:172] (0xc00088a000) (3) Data frame handling\nI0607 13:38:20.346814 810 log.go:172] (0xc00088a000) (3) Data frame sent\nI0607 13:38:20.347180 810 log.go:172] (0xc000a36210) Data frame received for 5\nI0607 13:38:20.347208 810 log.go:172] (0xc00088a0a0) (5) Data frame handling\nI0607 13:38:20.347227 810 log.go:172] (0xc000a36210) Data frame received for 3\nI0607 13:38:20.347234 810 log.go:172] (0xc00088a000) (3) Data frame handling\nI0607 13:38:20.349507 810 log.go:172] (0xc000a36210) Data frame received for 1\nI0607 13:38:20.349528 810 log.go:172] (0xc0005fc3c0) (1) Data frame handling\nI0607 13:38:20.349553 810 log.go:172] (0xc0005fc3c0) (1) Data frame sent\nI0607 13:38:20.349574 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Stream removed, broadcasting: 1\nI0607 
13:38:20.349604 810 log.go:172] (0xc000a36210) Go away received\nI0607 13:38:20.349911 810 log.go:172] (0xc000a36210) (0xc0005fc3c0) Stream removed, broadcasting: 1\nI0607 13:38:20.349924 810 log.go:172] (0xc000a36210) (0xc00088a000) Stream removed, broadcasting: 3\nI0607 13:38:20.349930 810 log.go:172] (0xc000a36210) (0xc00088a0a0) Stream removed, broadcasting: 5\n"
Jun 7 13:38:20.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:38:20.356: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:38:20.356: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:38:20.359: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jun 7 13:38:30.367: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:38:30.367: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:38:30.367: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:38:30.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999447s
Jun 7 13:38:31.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.953371378s
Jun 7 13:38:32.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.948132706s
Jun 7 13:38:33.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.943597355s
Jun 7 13:38:34.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938067997s
Jun 7 13:38:35.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.932928584s
Jun 7 13:38:36.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.929361498s
Jun 7 13:38:37.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923926385s
Jun 7 13:38:38.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.919586309s
Jun 7 13:38:39.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 915.673182ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9899
Jun 7 13:38:40.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:38:40.702: INFO: stderr: "I0607 13:38:40.599845 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Create stream\nI0607 13:38:40.599911 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Stream added, broadcasting: 1\nI0607 13:38:40.602031 831 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0607 13:38:40.602086 831 log.go:172] (0xc000116dc0) (0xc0008c0000) Create stream\nI0607 13:38:40.602111 831 log.go:172] (0xc000116dc0) (0xc0008c0000) Stream added, broadcasting: 3\nI0607 13:38:40.602966 831 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0607 13:38:40.603009 831 log.go:172] (0xc000116dc0) (0xc00064c960) Create stream\nI0607 13:38:40.603024 831 log.go:172] (0xc000116dc0) (0xc00064c960) Stream added, broadcasting: 5\nI0607 13:38:40.603985 831 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0607 13:38:40.690467 831 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:38:40.690502 831 log.go:172] (0xc00064c960) (5) Data frame handling\nI0607 13:38:40.690526 831 log.go:172] (0xc00064c960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:38:40.694543 831 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 13:38:40.694577 831 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0607 13:38:40.694592 831 log.go:172] (0xc0008c0000) (3) Data frame sent\nI0607 13:38:40.694648 831 log.go:172] (0xc000116dc0) Data frame received for 5\nI0607 13:38:40.694790 831 log.go:172] (0xc00064c960) (5) Data frame handling\nI0607 13:38:40.694845 831 log.go:172] (0xc000116dc0) Data frame received for 3\nI0607 13:38:40.694872 831 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0607 13:38:40.696257 831 log.go:172] (0xc000116dc0) Data frame received for 1\nI0607 13:38:40.696276 831 log.go:172] (0xc00064c8c0) (1) Data frame handling\nI0607 13:38:40.696293 831 log.go:172] (0xc00064c8c0) (1) Data frame sent\nI0607 13:38:40.696378 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Stream removed, broadcasting: 1\nI0607 
13:38:40.696491 831 log.go:172] (0xc000116dc0) Go away received\nI0607 13:38:40.696598 831 log.go:172] (0xc000116dc0) (0xc00064c8c0) Stream removed, broadcasting: 1\nI0607 13:38:40.696610 831 log.go:172] (0xc000116dc0) (0xc0008c0000) Stream removed, broadcasting: 3\nI0607 13:38:40.696616 831 log.go:172] (0xc000116dc0) (0xc00064c960) Stream removed, broadcasting: 5\n"
Jun 7 13:38:40.702: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:38:40.702: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:38:40.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:38:40.967: INFO: stderr: "I0607 13:38:40.886359 852 log.go:172] (0xc000448790) (0xc0007f2a00) Create stream\nI0607 13:38:40.886435 852 log.go:172] (0xc000448790) (0xc0007f2a00) Stream added, broadcasting: 1\nI0607 13:38:40.890778 852 log.go:172] (0xc000448790) Reply frame received for 1\nI0607 13:38:40.890841 852 log.go:172] (0xc000448790) (0xc0007f2000) Create stream\nI0607 13:38:40.890856 852 log.go:172] (0xc000448790) (0xc0007f2000) Stream added, broadcasting: 3\nI0607 13:38:40.891725 852 log.go:172] (0xc000448790) Reply frame received for 3\nI0607 13:38:40.891767 852 log.go:172] (0xc000448790) (0xc0007f20a0) Create stream\nI0607 13:38:40.891778 852 log.go:172] (0xc000448790) (0xc0007f20a0) Stream added, broadcasting: 5\nI0607 13:38:40.892732 852 log.go:172] (0xc000448790) Reply frame received for 5\nI0607 13:38:40.958850 852 log.go:172] (0xc000448790) Data frame received for 3\nI0607 13:38:40.958878 852 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0607 13:38:40.958899 852 log.go:172] (0xc0007f2000) (3) Data frame sent\nI0607 13:38:40.958905 852 log.go:172] (0xc000448790) Data frame received for 3\nI0607 13:38:40.958909 852 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0607 13:38:40.959016 852 log.go:172] (0xc000448790) Data frame received for 5\nI0607 13:38:40.959056 852 log.go:172] (0xc0007f20a0) (5) Data frame handling\nI0607 13:38:40.959082 852 log.go:172] (0xc0007f20a0) (5) Data frame sent\nI0607 13:38:40.959096 852 log.go:172] (0xc000448790) Data frame received for 5\nI0607 13:38:40.959110 852 log.go:172] (0xc0007f20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:38:40.960540 852 log.go:172] (0xc000448790) Data frame received for 1\nI0607 13:38:40.960554 852 log.go:172] (0xc0007f2a00) (1) Data frame handling\nI0607 13:38:40.960566 852 log.go:172] (0xc0007f2a00) (1) Data frame sent\nI0607 13:38:40.960575 852 log.go:172] (0xc000448790) (0xc0007f2a00) Stream removed, broadcasting: 1\nI0607 
13:38:40.960585 852 log.go:172] (0xc000448790) Go away received\nI0607 13:38:40.960910 852 log.go:172] (0xc000448790) (0xc0007f2a00) Stream removed, broadcasting: 1\nI0607 13:38:40.960922 852 log.go:172] (0xc000448790) (0xc0007f2000) Stream removed, broadcasting: 3\nI0607 13:38:40.960927 852 log.go:172] (0xc000448790) (0xc0007f20a0) Stream removed, broadcasting: 5\n"
Jun 7 13:38:40.967: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:38:40.967: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:38:40.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9899 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:38:41.204: INFO: stderr: "I0607 13:38:41.125832 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Create stream\nI0607 13:38:41.125902 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Stream added, broadcasting: 1\nI0607 13:38:41.129404 873 log.go:172] (0xc000a7a210) Reply frame received for 1\nI0607 13:38:41.129459 873 log.go:172] (0xc000a7a210) (0xc0006bc0a0) Create stream\nI0607 13:38:41.129474 873 log.go:172] (0xc000a7a210) (0xc0006bc0a0) Stream added, broadcasting: 3\nI0607 13:38:41.130479 873 log.go:172] (0xc000a7a210) Reply frame received for 3\nI0607 13:38:41.130512 873 log.go:172] (0xc000a7a210) (0xc0006de280) Create stream\nI0607 13:38:41.130521 873 log.go:172] (0xc000a7a210) (0xc0006de280) Stream added, broadcasting: 5\nI0607 13:38:41.131468 873 log.go:172] (0xc000a7a210) Reply frame received for 5\nI0607 13:38:41.198557 873 log.go:172] (0xc000a7a210) Data frame received for 3\nI0607 13:38:41.198588 873 log.go:172] (0xc0006bc0a0) (3) Data frame handling\nI0607 13:38:41.198606 873 log.go:172] (0xc0006bc0a0) (3) Data frame sent\nI0607 13:38:41.198614 873 log.go:172] (0xc000a7a210) Data frame received for 3\nI0607 13:38:41.198622 873 log.go:172] (0xc0006bc0a0) (3) Data frame handling\nI0607 13:38:41.198651 873 log.go:172] (0xc000a7a210) Data frame received for 5\nI0607 13:38:41.198658 873 log.go:172] (0xc0006de280) (5) Data frame handling\nI0607 13:38:41.198670 873 log.go:172] (0xc0006de280) (5) Data frame sent\nI0607 13:38:41.198677 873 log.go:172] (0xc000a7a210) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:38:41.198682 873 log.go:172] (0xc0006de280) (5) Data frame handling\nI0607 13:38:41.200041 873 log.go:172] (0xc000a7a210) Data frame received for 1\nI0607 13:38:41.200059 873 log.go:172] (0xc0006de1e0) (1) Data frame handling\nI0607 13:38:41.200071 873 log.go:172] (0xc0006de1e0) (1) Data frame sent\nI0607 13:38:41.200082 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Stream removed, broadcasting: 1\nI0607 
13:38:41.200119 873 log.go:172] (0xc000a7a210) Go away received\nI0607 13:38:41.200356 873 log.go:172] (0xc000a7a210) (0xc0006de1e0) Stream removed, broadcasting: 1\nI0607 13:38:41.200371 873 log.go:172] (0xc000a7a210) (0xc0006bc0a0) Stream removed, broadcasting: 3\nI0607 13:38:41.200377 873 log.go:172] (0xc000a7a210) (0xc0006de280) Stream removed, broadcasting: 5\n"
Jun 7 13:38:41.205: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:38:41.205: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:38:41.205: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 7 13:39:11.222: INFO: Deleting all statefulset in ns statefulset-9899
Jun 7 13:39:11.224: INFO: Scaling statefulset ss to 0
Jun 7 13:39:11.232: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:39:11.234: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:39:11.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9899" for this suite.
Jun 7 13:39:19.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:39:19.621: INFO: namespace statefulset-9899 deletion completed in 8.265820174s
• [SLOW TEST:111.664 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:39:19.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-73bb5d80-9328-492b-ac8d-18121c74edf1
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-73bb5d80-9328-492b-ac8d-18121c74edf1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:40:43.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8661" for this suite.
Jun 7 13:41:05.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:41:05.951: INFO: namespace configmap-8661 deletion completed in 22.13203767s
• [SLOW TEST:106.329 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:41:05.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 7 13:41:06.765: INFO: Waiting up to 5m0s for pod "pod-785ece4d-6601-402e-a19d-a245b14033bd" in namespace "emptydir-2938" to be "success or failure"
Jun 7 13:41:06.819: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Pending", Reason="", readiness=false. Elapsed: 53.896015ms
Jun 7 13:41:08.824: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058758851s
Jun 7 13:41:11.404: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.63906955s
Jun 7 13:41:13.408: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.643599288s
STEP: Saw pod success
Jun 7 13:41:13.408: INFO: Pod "pod-785ece4d-6601-402e-a19d-a245b14033bd" satisfied condition "success or failure"
Jun 7 13:41:13.412: INFO: Trying to get logs from node iruya-worker2 pod pod-785ece4d-6601-402e-a19d-a245b14033bd container test-container:
STEP: delete the pod
Jun 7 13:41:13.563: INFO: Waiting for pod pod-785ece4d-6601-402e-a19d-a245b14033bd to disappear
Jun 7 13:41:13.596: INFO: Pod pod-785ece4d-6601-402e-a19d-a245b14033bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:41:13.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2938" for this suite.
Jun 7 13:41:19.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:41:19.856: INFO: namespace emptydir-2938 deletion completed in 6.256105574s
• [SLOW TEST:13.904 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:41:19.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:41:19.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53" in namespace "projected-1917" to be "success or failure"
Jun 7 13:41:20.022: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 36.596671ms
Jun 7 13:41:22.026: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040615354s
Jun 7 13:41:24.122: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137051837s
Jun 7 13:41:26.126: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140993254s
Jun 7 13:41:28.140: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154894539s
STEP: Saw pod success
Jun 7 13:41:28.140: INFO: Pod "downwardapi-volume-917986f4-a629-4002-996f-785958d30f53" satisfied condition "success or failure"
Jun 7 13:41:28.143: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-917986f4-a629-4002-996f-785958d30f53 container client-container:
STEP: delete the pod
Jun 7 13:41:28.204: INFO: Waiting for pod downwardapi-volume-917986f4-a629-4002-996f-785958d30f53 to disappear
Jun 7 13:41:28.314: INFO: Pod downwardapi-volume-917986f4-a629-4002-996f-785958d30f53 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:41:28.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1917" for this suite.
Jun 7 13:41:34.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:41:34.440: INFO: namespace projected-1917 deletion completed in 6.121328893s
• [SLOW TEST:14.584 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:41:34.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:41:40.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9174" for this suite.
Jun 7 13:41:47.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:41:47.083: INFO: namespace emptydir-wrapper-9174 deletion completed in 6.207188114s
• [SLOW TEST:12.643 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:41:47.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-49t2
STEP: Creating a pod to test atomic-volume-subpath
Jun 7 13:41:47.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-49t2" in namespace "subpath-2429" to be "success or failure"
Jun 7 13:41:47.936: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.152666ms
Jun 7 13:41:50.009: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102627131s
Jun 7 13:41:52.063: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156890923s
Jun 7 13:41:54.068: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161124929s
Jun 7 13:41:56.072: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 8.16578035s
Jun 7 13:41:58.076: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 10.169446504s
Jun 7 13:42:00.079: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 12.173092995s
Jun 7 13:42:02.084: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 14.177539951s
Jun 7 13:42:04.088: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 16.181778182s
Jun 7 13:42:06.249: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 18.342141324s
Jun 7 13:42:08.253: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 20.346704032s
Jun 7 13:42:10.258: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 22.351126437s
Jun 7 13:42:12.262: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 24.355604628s
Jun 7 13:42:14.267: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 26.360688204s
Jun 7 13:42:16.271: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Running", Reason="", readiness=true. Elapsed: 28.365058531s
Jun 7 13:42:18.275: INFO: Pod "pod-subpath-test-projected-49t2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.368963545s
STEP: Saw pod success
Jun 7 13:42:18.275: INFO: Pod "pod-subpath-test-projected-49t2" satisfied condition "success or failure"
Jun 7 13:42:18.278: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-49t2 container test-container-subpath-projected-49t2:
STEP: delete the pod
Jun 7 13:42:18.305: INFO: Waiting for pod pod-subpath-test-projected-49t2 to disappear
Jun 7 13:42:18.322: INFO: Pod pod-subpath-test-projected-49t2 no longer exists
STEP: Deleting pod pod-subpath-test-projected-49t2
Jun 7 13:42:18.322: INFO: Deleting pod "pod-subpath-test-projected-49t2" in namespace "subpath-2429"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:42:18.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2429" for this suite.
Jun 7 13:42:24.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:42:24.435: INFO: namespace subpath-2429 deletion completed in 6.107916649s
• [SLOW TEST:37.352 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:42:24.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2784
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jun 7 13:42:24.641: INFO: Found 0 stateful pods, waiting for 3
Jun 7 13:42:34.645: INFO: Found 2 stateful pods, waiting for 3
Jun 7 13:42:44.647: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:42:44.647: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:42:44.647: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jun 7 13:42:44.674: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jun 7 13:42:54.726: INFO: Updating stateful set ss2
Jun 7 13:42:54.839: INFO: Waiting for Pod statefulset-2784/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jun 7 13:43:05.786: INFO: Found 2 stateful pods, waiting for 3
Jun 7 13:43:15.828: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:43:15.828: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:43:15.828: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jun 7 13:43:15.853: INFO: Updating stateful set ss2
Jun 7 13:43:16.020: INFO: Waiting for Pod statefulset-2784/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 7 13:43:26.047: INFO: Updating stateful set ss2
Jun 7 13:43:26.173: INFO: Waiting for StatefulSet statefulset-2784/ss2 to complete update
Jun 7 13:43:26.173: INFO: Waiting for Pod statefulset-2784/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 7 13:43:36.180: INFO: Waiting for StatefulSet statefulset-2784/ss2 to complete update
Jun 7 13:43:36.180: INFO: Waiting for Pod statefulset-2784/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 7 13:43:46.180: INFO: Waiting for StatefulSet statefulset-2784/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 7 13:43:56.180: INFO: Deleting all statefulset in ns statefulset-2784
Jun 7 13:43:56.183: INFO: Scaling statefulset ss2 to 0
Jun 7 13:44:26.229: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:44:26.232: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:44:26.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2784" for this suite.
Jun 7 13:44:34.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:44:34.407: INFO: namespace statefulset-2784 deletion completed in 8.130509464s
• [SLOW TEST:129.971 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:44:34.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4754
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4754
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4754
Jun 7 13:44:34.642: INFO: Found 0 stateful pods, waiting for 1
Jun 7 13:44:44.647: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jun 7 13:44:44.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:44:44.905: INFO: stderr: "I0607 13:44:44.776846 893 log.go:172] (0xc0009d2420) (0xc00090a820) Create stream\nI0607 13:44:44.776921 893 log.go:172] (0xc0009d2420) (0xc00090a820) Stream added, broadcasting: 1\nI0607 13:44:44.778936 893 log.go:172] (0xc0009d2420) Reply frame received for 1\nI0607 13:44:44.778973 893 log.go:172] (0xc0009d2420) (0xc00090a8c0) Create stream\nI0607 13:44:44.778980 893 log.go:172] (0xc0009d2420) (0xc00090a8c0) Stream added, broadcasting: 3\nI0607 13:44:44.779827 893 log.go:172] (0xc0009d2420) Reply frame received for 3\nI0607 13:44:44.779866 893 log.go:172] (0xc0009d2420) (0xc0005ba460) Create stream\nI0607 13:44:44.779881 893 log.go:172] (0xc0009d2420) (0xc0005ba460) Stream added, broadcasting: 5\nI0607 13:44:44.780547 893 log.go:172] (0xc0009d2420) Reply frame received for 5\nI0607 13:44:44.845382 893 log.go:172] (0xc0009d2420) Data frame received for 5\nI0607 13:44:44.845412 893 log.go:172] (0xc0005ba460) (5) Data frame handling\nI0607 13:44:44.845432 893 log.go:172] (0xc0005ba460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:44:44.896151 893 log.go:172] (0xc0009d2420) Data frame received for 5\nI0607 13:44:44.896197 893 log.go:172] (0xc0005ba460) (5) Data frame handling\nI0607 13:44:44.896230 893 log.go:172] (0xc0009d2420) Data frame received for 3\nI0607 13:44:44.896262 893 log.go:172] (0xc00090a8c0) (3) Data frame handling\nI0607 13:44:44.896293 893 log.go:172] (0xc00090a8c0) (3) Data frame sent\nI0607 13:44:44.896311 893 log.go:172] (0xc0009d2420) Data frame received for 3\nI0607 13:44:44.896324 893 log.go:172] (0xc00090a8c0) (3) Data frame handling\nI0607 13:44:44.898210 893 log.go:172] (0xc0009d2420) Data frame received for 1\nI0607 13:44:44.898236 893 log.go:172] (0xc00090a820) (1) Data frame handling\nI0607 13:44:44.898258 893 log.go:172] (0xc00090a820) (1) Data frame sent\nI0607 13:44:44.898275 893 log.go:172] (0xc0009d2420) (0xc00090a820) Stream removed, broadcasting: 1\nI0607 13:44:44.898290 893 log.go:172] (0xc0009d2420) Go away received\nI0607 13:44:44.898635 893 log.go:172] (0xc0009d2420) (0xc00090a820) Stream removed, broadcasting: 1\nI0607 13:44:44.898659 893 log.go:172] (0xc0009d2420) (0xc00090a8c0) Stream removed, broadcasting: 3\nI0607 13:44:44.898671 893 log.go:172] (0xc0009d2420) (0xc0005ba460) Stream removed, broadcasting: 5\n"
Jun 7 13:44:44.905: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:44:44.905: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:44:44.909: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 7 13:44:54.913: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:44:54.913: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:44:54.949: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:44:54.949: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:44:54.949: INFO:
Jun 7 13:44:54.949: INFO: StatefulSet ss has not reached scale 3, at 1
Jun 7 13:44:56.521: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.971948891s
Jun 7 13:44:57.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.399548278s
Jun 7 13:44:58.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.394731839s
Jun 7 13:44:59.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.389624509s
Jun 7 13:45:00.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.34253657s
Jun 7 13:45:01.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.331038032s
Jun 7 13:45:02.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.295573123s
Jun 7 13:45:03.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.291074837s
Jun 7 13:45:04.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 286.806801ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4754
Jun 7 13:45:05.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:05.889: INFO: stderr: "I0607 13:45:05.801515 915 log.go:172] (0xc0008d8420) (0xc000360820) Create stream\nI0607 13:45:05.801573 915 log.go:172] (0xc0008d8420) (0xc000360820) Stream added, broadcasting: 1\nI0607 13:45:05.803424 915 log.go:172] (0xc0008d8420) Reply frame received for 1\nI0607 13:45:05.803474 915 log.go:172] (0xc0008d8420) (0xc000960000) Create stream\nI0607 13:45:05.803489 915 log.go:172] (0xc0008d8420) (0xc000960000) Stream added, broadcasting: 3\nI0607 13:45:05.804211 915 log.go:172] (0xc0008d8420) Reply frame received for 3\nI0607 13:45:05.804246 915 log.go:172] (0xc0008d8420) (0xc000784000) Create stream\nI0607 13:45:05.804256 915 log.go:172] (0xc0008d8420) (0xc000784000) Stream added, broadcasting: 5\nI0607 13:45:05.805275 915 log.go:172] (0xc0008d8420) Reply frame received for 5\nI0607 13:45:05.877085 915 log.go:172] (0xc0008d8420) Data frame received for 5\nI0607 13:45:05.877286 915 log.go:172] (0xc000784000) (5) Data frame handling\nI0607 13:45:05.877308 915 log.go:172] (0xc000784000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0607 13:45:05.879969 915 log.go:172] (0xc0008d8420) Data frame received for 3\nI0607 13:45:05.879998 915 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 13:45:05.880015 915 log.go:172] (0xc000960000) (3) Data frame sent\nI0607 13:45:05.880181 915 log.go:172] (0xc0008d8420) Data frame received for 3\nI0607 13:45:05.880198 915 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 13:45:05.880412 915 log.go:172] (0xc0008d8420) Data frame received for 5\nI0607 13:45:05.880440 915 log.go:172] (0xc000784000) (5) Data frame handling\nI0607 13:45:05.881751 915 log.go:172] (0xc0008d8420) Data frame received for 1\nI0607 13:45:05.881766 915 log.go:172] (0xc000360820) (1) Data frame handling\nI0607 13:45:05.881785 915 log.go:172] (0xc000360820) (1) Data frame sent\nI0607 13:45:05.881802 915 log.go:172] (0xc0008d8420) (0xc000360820) Stream removed, broadcasting: 1\nI0607 13:45:05.881817 915 log.go:172] (0xc0008d8420) Go away received\nI0607 13:45:05.882643 915 log.go:172] (0xc0008d8420) (0xc000360820) Stream removed, broadcasting: 1\nI0607 13:45:05.882699 915 log.go:172] (0xc0008d8420) (0xc000960000) Stream removed, broadcasting: 3\nI0607 13:45:05.882721 915 log.go:172] (0xc0008d8420) (0xc000784000) Stream removed, broadcasting: 5\n"
Jun 7 13:45:05.889: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:45:05.889: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:45:05.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:06.159: INFO: stderr: "I0607 13:45:06.093550 939 log.go:172] (0xc000842370) (0xc0001f8960) Create stream\nI0607 13:45:06.093639 939 log.go:172] (0xc000842370) (0xc0001f8960) Stream added, broadcasting: 1\nI0607 13:45:06.095515 939 log.go:172] (0xc000842370) Reply frame received for 1\nI0607 13:45:06.095560 939 log.go:172] (0xc000842370) (0xc0006e0000) Create stream\nI0607 13:45:06.095593 939 log.go:172] (0xc000842370) (0xc0006e0000) Stream added, broadcasting: 3\nI0607 13:45:06.096496 939 log.go:172] (0xc000842370) Reply frame received for 3\nI0607 13:45:06.096531 939 log.go:172] (0xc000842370) (0xc0001f8a00) Create stream\nI0607 13:45:06.096543 939 log.go:172] (0xc000842370) (0xc0001f8a00) Stream added, broadcasting: 5\nI0607 13:45:06.097465 939 log.go:172] (0xc000842370) Reply frame received for 5\nI0607 13:45:06.152326 939 log.go:172] (0xc000842370) Data frame received for 3\nI0607 13:45:06.152359 939 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0607 13:45:06.152369 939 log.go:172] (0xc0006e0000) (3) Data frame sent\nI0607 13:45:06.152378 939 log.go:172] (0xc000842370) Data frame received for 3\nI0607 13:45:06.152389 939 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0607 13:45:06.152411 939 log.go:172] (0xc000842370) Data frame received for 5\nI0607 13:45:06.152421 939 log.go:172] (0xc0001f8a00) (5) Data frame handling\nI0607 13:45:06.152432 939 log.go:172] (0xc0001f8a00) (5) Data frame sent\nI0607 13:45:06.152453 939 log.go:172] (0xc000842370) Data frame received for 5\nI0607 13:45:06.152462 939 log.go:172] (0xc0001f8a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0607 13:45:06.153670 939 log.go:172] (0xc000842370) Data frame received for 1\nI0607 13:45:06.153690 939 log.go:172] (0xc0001f8960) (1) Data frame handling\nI0607 13:45:06.153719 939 log.go:172] (0xc0001f8960) (1) Data frame sent\nI0607 13:45:06.153787 939 log.go:172] (0xc000842370) (0xc0001f8960) Stream removed, broadcasting: 1\nI0607 13:45:06.153814 939 log.go:172] (0xc000842370) Go away received\nI0607 13:45:06.154347 939 log.go:172] (0xc000842370) (0xc0001f8960) Stream removed, broadcasting: 1\nI0607 13:45:06.154376 939 log.go:172] (0xc000842370) (0xc0006e0000) Stream removed, broadcasting: 3\nI0607 13:45:06.154389 939 log.go:172] (0xc000842370) (0xc0001f8a00) Stream removed, broadcasting: 5\n"
Jun 7 13:45:06.159: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:45:06.159: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:45:06.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:06.346: INFO: stderr: "I0607 13:45:06.276949 954 log.go:172] (0xc0009ca420) (0xc00010c820) Create stream\nI0607 13:45:06.276999 954 log.go:172] (0xc0009ca420) (0xc00010c820) Stream added, broadcasting: 1\nI0607 13:45:06.279363 954 log.go:172] (0xc0009ca420) Reply frame received for 1\nI0607 13:45:06.279420 954 log.go:172] (0xc0009ca420) (0xc0007e4000) Create stream\nI0607 13:45:06.279445 954 log.go:172] (0xc0009ca420) (0xc0007e4000) Stream added, broadcasting: 3\nI0607 13:45:06.280564 954 log.go:172] (0xc0009ca420) Reply frame received for 3\nI0607 13:45:06.280609 954 log.go:172] (0xc0009ca420) (0xc00010c8c0) Create stream\nI0607 13:45:06.280627 954 log.go:172] (0xc0009ca420) (0xc00010c8c0) Stream added, broadcasting: 5\nI0607 13:45:06.281764 954 log.go:172] (0xc0009ca420) Reply frame received for 5\nI0607 13:45:06.337780 954 log.go:172] (0xc0009ca420) Data frame received for 3\nI0607 13:45:06.337841 954 log.go:172] (0xc0007e4000) (3) Data frame handling\nI0607 13:45:06.337866 954 log.go:172] (0xc0007e4000) (3) Data frame sent\nI0607 13:45:06.337896 954 log.go:172] (0xc0009ca420) Data frame received for 5\nI0607 13:45:06.337917 954 log.go:172] (0xc00010c8c0) (5) Data frame handling\nI0607 13:45:06.337931 954 log.go:172] (0xc00010c8c0) (5) Data frame sent\nI0607 13:45:06.337943 954 log.go:172] (0xc0009ca420) Data frame received for 5\nI0607 13:45:06.337950 954 log.go:172] (0xc00010c8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0607 13:45:06.337979 954 log.go:172] (0xc0009ca420) Data frame received for 3\nI0607 13:45:06.337990 954 log.go:172] (0xc0007e4000) (3) Data frame handling\nI0607 13:45:06.339897 954 log.go:172] (0xc0009ca420) Data frame received for 1\nI0607 13:45:06.339930 954 log.go:172] (0xc00010c820) (1) Data frame handling\nI0607 13:45:06.339955 954 log.go:172] (0xc00010c820) (1) Data frame sent\nI0607 13:45:06.339974 954 log.go:172] (0xc0009ca420) (0xc00010c820) Stream removed, broadcasting: 1\nI0607 13:45:06.339995 954 log.go:172] (0xc0009ca420) Go away received\nI0607 13:45:06.340326 954 log.go:172] (0xc0009ca420) (0xc00010c820) Stream removed, broadcasting: 1\nI0607 13:45:06.340352 954 log.go:172] (0xc0009ca420) (0xc0007e4000) Stream removed, broadcasting: 3\nI0607 13:45:06.340359 954 log.go:172] (0xc0009ca420) (0xc00010c8c0) Stream removed, broadcasting: 5\n"
Jun 7 13:45:06.346: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 7 13:45:06.346: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 7 13:45:06.349: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:45:06.349: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 13:45:06.349: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jun 7 13:45:06.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:45:06.558: INFO: stderr: "I0607 13:45:06.476816 976 log.go:172] (0xc000aea210) (0xc000ae4140) Create stream\nI0607 13:45:06.476892 976 log.go:172] (0xc000aea210) (0xc000ae4140) Stream added, broadcasting: 1\nI0607 13:45:06.489776 976 log.go:172] (0xc000aea210) Reply frame received for 1\nI0607 13:45:06.489828 976 log.go:172] (0xc000aea210) (0xc00040a280) Create stream\nI0607 13:45:06.489842 976 log.go:172] (0xc000aea210) (0xc00040a280) Stream added, broadcasting: 3\nI0607 13:45:06.492555 976 log.go:172] (0xc000aea210) Reply frame received for 3\nI0607 13:45:06.492580 976 log.go:172] (0xc000aea210) (0xc000ae4280) Create stream\nI0607 13:45:06.492589 976 log.go:172] (0xc000aea210) (0xc000ae4280) Stream added, broadcasting: 5\nI0607 13:45:06.494270 976 log.go:172] (0xc000aea210) Reply frame received for 5\nI0607 13:45:06.548994 976 log.go:172] (0xc000aea210) Data frame received for 5\nI0607 13:45:06.549018 976 log.go:172] (0xc000ae4280) (5) Data frame handling\nI0607 13:45:06.549026 976 log.go:172] (0xc000ae4280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:45:06.549038 976 log.go:172] (0xc000aea210) Data frame received for 3\nI0607 13:45:06.549047 976 log.go:172] (0xc00040a280) (3) Data frame handling\nI0607 13:45:06.549056 976 log.go:172] (0xc00040a280) (3) Data frame sent\nI0607 13:45:06.549064 976 log.go:172] (0xc000aea210) Data frame received for 3\nI0607 13:45:06.549072 976 log.go:172] (0xc00040a280) (3) Data frame handling\nI0607 13:45:06.549569 976 log.go:172] (0xc000aea210) Data frame received for 5\nI0607 13:45:06.549606 976 log.go:172] (0xc000ae4280) (5) Data frame handling\nI0607 13:45:06.550974 976 log.go:172] (0xc000aea210) Data frame received for 1\nI0607 13:45:06.550995 976 log.go:172] (0xc000ae4140) (1) Data frame handling\nI0607 13:45:06.551003 976 log.go:172] (0xc000ae4140) (1) Data frame sent\nI0607 13:45:06.551015 976 log.go:172] (0xc000aea210) (0xc000ae4140) Stream removed, broadcasting: 1\nI0607 13:45:06.551025 976 log.go:172] (0xc000aea210) Go away received\nI0607 13:45:06.551314 976 log.go:172] (0xc000aea210) (0xc000ae4140) Stream removed, broadcasting: 1\nI0607 13:45:06.551327 976 log.go:172] (0xc000aea210) (0xc00040a280) Stream removed, broadcasting: 3\nI0607 13:45:06.551335 976 log.go:172] (0xc000aea210) (0xc000ae4280) Stream removed, broadcasting: 5\n"
Jun 7 13:45:06.558: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:45:06.558: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:45:06.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:45:06.775: INFO: stderr: "I0607 13:45:06.678043 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Create stream\nI0607 13:45:06.678105 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Stream added, broadcasting: 1\nI0607 13:45:06.680858 996 log.go:172] (0xc0008c6420) Reply frame received for 1\nI0607 13:45:06.680917 996 log.go:172] (0xc0008c6420) (0xc000518780) Create stream\nI0607 13:45:06.680940 996 log.go:172] (0xc0008c6420) (0xc000518780) Stream added, broadcasting: 3\nI0607 13:45:06.682051 996 log.go:172] (0xc0008c6420) Reply frame received for 3\nI0607 13:45:06.682082 996 log.go:172] (0xc0008c6420) (0xc000832000) Create stream\nI0607 13:45:06.682094 996 log.go:172] (0xc0008c6420) (0xc000832000) Stream added, broadcasting: 5\nI0607 13:45:06.682780 996 log.go:172] (0xc0008c6420) Reply frame received for 5\nI0607 13:45:06.738830 996 log.go:172] (0xc0008c6420) Data frame received for 5\nI0607 13:45:06.738858 996 log.go:172] (0xc000832000) (5) Data frame handling\nI0607 13:45:06.738993 996 log.go:172] (0xc000832000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:45:06.765502 996 log.go:172] (0xc0008c6420) Data frame received for 3\nI0607 13:45:06.765550 996 log.go:172] (0xc000518780) (3) Data frame handling\nI0607 13:45:06.765585 996 log.go:172] (0xc000518780) (3) Data frame sent\nI0607 13:45:06.765604 996 log.go:172] (0xc0008c6420) Data frame received for 3\nI0607 13:45:06.765636 996 log.go:172] (0xc000518780) (3) Data frame handling\nI0607 13:45:06.765817 996 log.go:172] (0xc0008c6420) Data frame received for 5\nI0607 13:45:06.765835 996 log.go:172] (0xc000832000) (5) Data frame handling\nI0607 13:45:06.767600 996 log.go:172] (0xc0008c6420) Data frame received for 1\nI0607 13:45:06.768040 996 log.go:172] (0xc0005186e0) (1) Data frame handling\nI0607 13:45:06.768097 996 log.go:172] (0xc0005186e0) (1) Data frame sent\nI0607 13:45:06.768133 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Stream removed, broadcasting: 1\nI0607 13:45:06.768423 996 log.go:172] (0xc0008c6420) Go away received\nI0607 13:45:06.768714 996 log.go:172] (0xc0008c6420) (0xc0005186e0) Stream removed, broadcasting: 1\nI0607 13:45:06.768746 996 log.go:172] (0xc0008c6420) (0xc000518780) Stream removed, broadcasting: 3\nI0607 13:45:06.768807 996 log.go:172] (0xc0008c6420) (0xc000832000) Stream removed, broadcasting: 5\n"
Jun 7 13:45:06.775: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:45:06.775: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:45:06.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 7 13:45:07.024: INFO: stderr: "I0607 13:45:06.913498 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Create stream\nI0607 13:45:06.913548 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Stream added, broadcasting: 1\nI0607 13:45:06.915898 1019 log.go:172] (0xc0009c4370) Reply frame received for 1\nI0607 13:45:06.915934 1019 log.go:172] (0xc0009c4370) (0xc0005ea280) Create stream\nI0607 13:45:06.915950 1019 log.go:172] (0xc0009c4370) (0xc0005ea280) Stream added, broadcasting: 3\nI0607 13:45:06.916825 1019 log.go:172] (0xc0009c4370) Reply frame received for 3\nI0607 13:45:06.916856 1019 log.go:172] (0xc0009c4370) (0xc000948780) Create stream\nI0607 13:45:06.916869 1019 log.go:172] (0xc0009c4370) (0xc000948780) Stream added, broadcasting: 5\nI0607 13:45:06.918096 1019 log.go:172] (0xc0009c4370) Reply frame received for 5\nI0607 13:45:06.986060 1019 log.go:172] (0xc0009c4370) Data frame received for 5\nI0607 13:45:06.986084 1019 log.go:172] (0xc000948780) (5) Data frame handling\nI0607 13:45:06.986096 1019 log.go:172] (0xc000948780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0607 13:45:07.017419 1019 log.go:172] (0xc0009c4370) Data frame received for 5\nI0607 13:45:07.017469 1019 log.go:172] (0xc000948780) (5) Data frame handling\nI0607 13:45:07.017497 1019 log.go:172] (0xc0009c4370) Data frame received for 3\nI0607 13:45:07.017510 1019 log.go:172] (0xc0005ea280) (3) Data frame handling\nI0607 13:45:07.017523 1019 log.go:172] (0xc0005ea280) (3) Data frame sent\nI0607 13:45:07.017544 1019 log.go:172] (0xc0009c4370) Data frame received for 3\nI0607 13:45:07.017554 1019 log.go:172] (0xc0005ea280) (3) Data frame handling\nI0607 13:45:07.018747 1019 log.go:172] (0xc0009c4370) Data frame received for 1\nI0607 13:45:07.018826 1019 log.go:172] (0xc0009486e0) (1) Data frame handling\nI0607 13:45:07.018887 1019 log.go:172] (0xc0009486e0) (1) Data frame sent\nI0607 13:45:07.018918 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Stream removed, broadcasting: 1\nI0607 13:45:07.018941 1019 log.go:172] (0xc0009c4370) Go away received\nI0607 13:45:07.019282 1019 log.go:172] (0xc0009c4370) (0xc0009486e0) Stream removed, broadcasting: 1\nI0607 13:45:07.019300 1019 log.go:172] (0xc0009c4370) (0xc0005ea280) Stream removed, broadcasting: 3\nI0607 13:45:07.019308 1019 log.go:172] (0xc0009c4370) (0xc000948780) Stream removed, broadcasting: 5\n"
Jun 7 13:45:07.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 7 13:45:07.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 7 13:45:07.024: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:45:07.054: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jun 7 13:45:17.130: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:45:17.130: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:45:17.130: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 13:45:17.172: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:17.172: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:45:17.172: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:17.172: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:17.172: INFO:
Jun 7 13:45:17.172: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 7 13:45:18.396: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:18.396: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:45:18.396: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:18.396: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:18.396: INFO:
Jun 7 13:45:18.396: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 7 13:45:19.442: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:19.442: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:45:19.442: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:19.442: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:19.442: INFO:
Jun 7 13:45:19.442: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 7 13:45:20.447: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:20.447: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:45:20.447: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:20.447: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:20.447: INFO:
Jun 7 13:45:20.447: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 7 13:45:21.534: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:21.534: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:45:21.534: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:21.534: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:21.534: INFO:
Jun 7 13:45:21.534: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 7 13:45:22.539: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:22.539: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:34 +0000 UTC }]
Jun 7 13:45:22.539: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:22.539: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:22.539: INFO:
Jun 7 13:45:22.539: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 7 13:45:23.587: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:23.587: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:23.587: INFO:
Jun 7 13:45:23.587: INFO: StatefulSet ss has not reached scale 0, at 1
Jun 7 13:45:24.605: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:24.605: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:24.605: INFO:
Jun 7 13:45:24.605: INFO: StatefulSet ss has not reached scale 0, at 1
Jun 7 13:45:25.610: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:25.610: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:25.610: INFO:
Jun 7 13:45:25.610: INFO: StatefulSet ss has not reached scale 0, at 1
Jun 7 13:45:26.614: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 7 13:45:26.615: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 13:44:54 +0000 UTC }]
Jun 7 13:45:26.615: INFO:
Jun 7 13:45:26.615: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4754
Jun 7 13:45:27.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:27.741: INFO: rc: 1
Jun 7 13:45:27.741: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx")
[] 0xc0027d5890 exit status 1 true [0xc0015320f0 0xc001532108 0xc001532120] [0xc0015320f0 0xc001532108 0xc001532120] [0xc001532100 0xc001532118] [0xba70e0 0xba70e0] 0xc0027cac60 }:
Command stdout:
stderr:
error: unable to upgrade connection: container not found ("nginx")
error:
exit status 1
Jun 7 13:45:37.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:37.837: INFO: rc: 1
Jun 7 13:45:37.837: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002770090 exit status 1 true [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4038 0xc002bd4080] [0xba70e0 0xba70e0] 0xc001c5e060 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:45:47.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:47.938: INFO: rc: 1
Jun 7 13:45:47.938: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002770180 exit status 1 true [0xc002bd40a8 0xc002bd40d0 0xc002bd40e8] [0xc002bd40a8 0xc002bd40d0 0xc002bd40e8] [0xc002bd40c8 0xc002bd40e0] [0xba70e0 0xba70e0] 0xc001c5e660 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:45:57.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:45:58.026: INFO: rc: 1
Jun 7 13:45:58.026: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0027d5980 exit status 1 true [0xc001532128 0xc001532140 0xc001532158] [0xc001532128 0xc001532140 0xc001532158] [0xc001532138 0xc001532150] [0xba70e0 0xba70e0] 0xc0027caf60 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:46:08.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:46:08.122: INFO: rc: 1
Jun 7 13:46:08.122: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002ab6030 exit status 1 true [0xc000187850 0xc0001878e0 0xc000187a00] [0xc000187850 0xc0001878e0 0xc000187a00] [0xc0001878b0 0xc0001879e0] [0xba70e0 0xba70e0] 0xc002cfab40 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:46:18.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:46:18.220: INFO: rc: 1
Jun 7 13:46:18.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dbad80 exit status 1 true [0xc0005e4fc0 0xc0005e5070 0xc0005e5108] [0xc0005e4fc0 0xc0005e5070 0xc0005e5108] [0xc0005e5010 0xc0005e5100] [0xba70e0 0xba70e0] 0xc002c94240 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:46:28.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:46:28.312: INFO: rc: 1
Jun 7 13:46:28.312: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dbae40 exit status 1 true [0xc0005e5128 0xc0005e5290 0xc0005e5380] [0xc0005e5128 0xc0005e5290 0xc0005e5380] [0xc0005e5248 0xc0005e5370] [0xba70e0 0xba70e0] 0xc002c94540 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:46:38.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:46:38.458: INFO: rc: 1
Jun 7 13:46:38.458: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002ab60f0 exit status 1 true [0xc000187a20 0xc000187ad8 0xc000187d98] [0xc000187a20 0xc000187ad8 0xc000187d98] [0xc000187ac0 0xc000187d30] [0xba70e0 0xba70e0] 0xc002cfae40 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:46:48.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:46:55.180: INFO: rc: 1
Jun 7 13:46:55.180: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002ab61b0 exit status 1 true [0xc000187e10 0xc000187e70 0xc0026e2008] [0xc000187e10 0xc000187e70 0xc0026e2008] [0xc000187e68 0xc0026e2000] [0xba70e0 0xba70e0] 0xc002cfb800 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:47:05.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:47:05.293: INFO: rc: 1
Jun 7 13:47:05.293: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dbaf30 exit status 1 true [0xc0005e5390 0xc0005e5498 0xc0005e5668] [0xc0005e5390 0xc0005e5498 0xc0005e5668] [0xc0005e5410 0xc0005e55d8] [0xba70e0 0xba70e0] 0xc002c94840 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:47:15.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:47:15.391: INFO: rc: 1
Jun 7 13:47:15.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001a24060 exit status 1 true [0xc000186f08 0xc0001870c0 0xc000187190] [0xc000186f08 0xc0001870c0 0xc000187190] [0xc000187028 0xc000187158] [0xba70e0 0xba70e0] 0xc00189e300 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:47:25.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:47:25.527: INFO: rc: 1
Jun 7 13:47:25.527: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dba090 exit status 1 true [0xc00053e038 0xc0005e4410 0xc0005e4948] [0xc00053e038 0xc0005e4410 0xc0005e4948] [0xc0005e4388 0xc0005e4900] [0xba70e0 0xba70e0] 0xc0024c0ba0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:47:35.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:47:35.627: INFO: rc: 1
Jun 7 13:47:35.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0027700c0 exit status 1 true [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4038 0xc002bd4080] [0xba70e0 0xba70e0] 0xc001c52060 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:47:45.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:47:45.735: INFO: rc: 1
Jun 7 13:47:45.735: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0028b60f0 exit status 1 true [0xc0026e2000 0xc0026e2018 0xc0026e2030] [0xc0026e2000 0xc0026e2018 0xc0026e2030] [0xc0026e2010 0xc0026e2028] [0xba70e0 0xba70e0] 0xc001f3c900 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:47:55.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:47:55.862: INFO: rc: 1
Jun 7 13:47:55.862: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001a24150 exit status 1 true [0xc0001871a0 0xc000187338 0xc0001873c0] [0xc0001871a0 0xc000187338 0xc0001873c0] [0xc0001872c0 0xc0001873a8] [0xba70e0 0xba70e0] 0xc001c5e120 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:48:05.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:48:06.001: INFO: rc: 1
Jun 7 13:48:06.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001a24210 exit status 1 true [0xc000187448 0xc000187568 0xc0001875f0] [0xc000187448 0xc000187568 0xc0001875f0] [0xc000187538 0xc000187578] [0xba70e0 0xba70e0] 0xc001c5e9c0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:48:16.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:48:16.092: INFO: rc: 1
Jun 7 13:48:16.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dba150 exit status 1 true [0xc0005e49b0 0xc0005e4ce0 0xc0005e4eb0] [0xc0005e49b0 0xc0005e4ce0 0xc0005e4eb0] [0xc0005e4b30 0xc0005e4ea0] [0xba70e0 0xba70e0] 0xc002c94000 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:48:26.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:48:26.184: INFO: rc: 1
Jun 7 13:48:26.185: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dba210 exit status 1 true [0xc0005e4ee8 0xc0005e4ff0 0xc0005e50a8] [0xc0005e4ee8 0xc0005e4ff0 0xc0005e50a8] [0xc0005e4fc0 0xc0005e5070] [0xba70e0 0xba70e0] 0xc002c94300 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:48:36.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:48:36.277: INFO: rc: 1
Jun 7 13:48:36.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001dba2d0 exit status 1 true [0xc0005e5100 0xc0005e5180 0xc0005e5300] [0xc0005e5100 0xc0005e5180 0xc0005e5300] [0xc0005e5128 0xc0005e5290] [0xba70e0 0xba70e0] 0xc002c94600 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:48:46.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:48:46.368: INFO: rc: 1
Jun 7 13:48:46.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002770210 exit status 1 true [0xc002bd40a8 0xc002bd40d0 0xc002bd40e8] [0xc002bd40a8 0xc002bd40d0 0xc002bd40e8] [0xc002bd40c8 0xc002bd40e0] [0xba70e0 0xba70e0] 0xc002cfa180 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:48:56.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:48:56.473: INFO: rc: 1
Jun 7 13:48:56.473: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0028b6270 exit status 1 true [0xc0026e2038 0xc0026e2050 0xc0026e2068] [0xc0026e2038 0xc0026e2050 0xc0026e2068] [0xc0026e2048 0xc0026e2060] [0xba70e0 0xba70e0] 0xc0027ca3c0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:49:06.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:49:06.678: INFO: rc: 1
Jun 7 13:49:06.678: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0028b6360 exit status 1 true [0xc0026e2078 0xc0026e20a8 0xc0026e20c8] [0xc0026e2078 0xc0026e20a8 0xc0026e20c8] [0xc0026e20a0 0xc0026e20b8] [0xba70e0 0xba70e0] 0xc0027caa20 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:49:16.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:49:16.777: INFO: rc: 1
Jun 7 13:49:16.777: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001a24300 exit status 1 true [0xc0001876f0 0xc000187710 0xc000187830] [0xc0001876f0 0xc000187710 0xc000187830] [0xc000187700 0xc0001877d8] [0xba70e0 0xba70e0] 0xc001c5f380 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:49:26.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:49:26.887: INFO: rc: 1
Jun 7 13:49:26.887: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0028b6090 exit status 1 true [0xc00053e038 0xc0026e2010 0xc0026e2028] [0xc00053e038 0xc0026e2010 0xc0026e2028] [0xc0026e2008 0xc0026e2020] [0xba70e0 0xba70e0] 0xc001f3cde0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:49:36.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:49:36.988: INFO: rc: 1
Jun 7 13:49:36.988: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0027700f0 exit status 1 true [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4010 0xc002bd4050 0xc002bd4098] [0xc002bd4038 0xc002bd4080] [0xba70e0 0xba70e0] 0xc001c532c0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:49:46.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:49:47.089: INFO: rc: 1
Jun 7 13:49:47.089: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc001a240f0 exit status 1 true [0xc000186000 0xc000187028 0xc000187158] [0xc000186000 0xc000187028 0xc000187158] [0xc000186fc8 0xc000187108] [0xba70e0 0xba70e0] 0xc0024c0000 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:49:57.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:49:57.179: INFO: rc: 1
Jun 7 13:49:57.179: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0027701e0 exit status 1 true [0xc002bd40a8 0xc002bd40d0 0xc002bd40e8] [0xc002bd40a8 0xc002bd40d0 0xc002bd40e8] [0xc002bd40c8 0xc002bd40e0] [0xba70e0 0xba70e0] 0xc00189f020 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:50:07.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:50:07.272: INFO: rc: 1
Jun 7 13:50:07.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc002770300 exit status 1 true [0xc002bd40f0 0xc002bd4118 0xc002bd4158] [0xc002bd40f0 0xc002bd4118 0xc002bd4158] [0xc002bd4100 0xc002bd4150] [0xba70e0 0xba70e0] 0xc0027ca000 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:50:17.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:50:17.361: INFO: rc: 1
Jun 7 13:50:17.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0028b61b0 exit status 1 true [0xc0026e2030 0xc0026e2048 0xc0026e2060] [0xc0026e2030 0xc0026e2048 0xc0026e2060] [0xc0026e2040 0xc0026e2058] [0xba70e0 0xba70e0] 0xc002cfa420 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:50:27.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:50:27.443: INFO: rc: 1
Jun 7 13:50:27.444: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found
[] 0xc0028b62d0 exit status 1 true [0xc0026e2068 0xc0026e20a0 0xc0026e20b8] [0xc0026e2068 0xc0026e20a0 0xc0026e20b8] [0xc0026e2090 0xc0026e20b0] [0xba70e0 0xba70e0] 0xc002cfac00 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
Jun 7 13:50:37.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 7 13:50:37.536: INFO: rc: 1
Jun 7 13:50:37.536: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1:
Jun 7 13:50:37.536: INFO: Scaling statefulset ss to 0
Jun 7 13:50:37.543: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 7 13:50:37.544: INFO: Deleting all statefulset in ns statefulset-4754
Jun 7 13:50:37.546: INFO: Scaling statefulset ss to 0
Jun 7 13:50:37.553: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 13:50:37.554: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:50:37.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4754" for this suite.
Jun 7 13:50:45.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:50:45.767: INFO: namespace statefulset-4754 deletion completed in 8.096389069s
• [SLOW TEST:371.360 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:50:45.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 7 13:50:45.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47" in namespace "projected-4519" to be "success or failure"
Jun 7 13:50:45.998: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Pending", Reason="", readiness=false. Elapsed: 51.773275ms
Jun 7 13:50:48.091: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144499633s
Jun 7 13:50:50.304: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357019182s
Jun 7 13:50:52.308: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.361053776s
STEP: Saw pod success
Jun 7 13:50:52.308: INFO: Pod "downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47" satisfied condition "success or failure"
Jun 7 13:50:52.310: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47 container client-container:
STEP: delete the pod
Jun 7 13:50:52.375: INFO: Waiting for pod downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47 to disappear
Jun 7 13:50:52.391: INFO: Pod downwardapi-volume-44b631f0-1615-4a2f-9675-819d3f1edf47 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:50:52.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4519" for this suite.
Jun 7 13:50:58.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:50:58.667: INFO: namespace projected-4519 deletion completed in 6.273116403s
• [SLOW TEST:12.900 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:50:58.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 7 13:50:58.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3677'
Jun 7 13:50:58.916: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 7 13:50:58.916: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jun 7 13:51:01.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3677'
Jun 7 13:51:01.257: INFO: stderr: ""
Jun 7 13:51:01.257: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:51:01.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3677" for this suite.
Jun 7 13:51:23.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:51:23.751: INFO: namespace kubectl-3677 deletion completed in 22.350430702s
• [SLOW TEST:25.083 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:51:23.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-5f77e3be-ded6-4781-b657-76a0b1cb8347
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:51:23.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7505" for this suite.
Jun 7 13:51:30.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:51:30.090: INFO: namespace secrets-7505 deletion completed in 6.118233779s
• [SLOW TEST:6.339 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:51:30.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0607 13:51:42.187462 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 7 13:51:42.187: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:51:42.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1549" for this suite.
Jun 7 13:51:54.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:51:54.353: INFO: namespace gc-1549 deletion completed in 12.103574221s
• [SLOW TEST:24.262 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:51:54.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jun 7 13:51:54.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1115'
Jun 7 13:51:54.924: INFO: stderr: ""
Jun 7 13:51:54.924: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 13:51:54.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:51:55.105: INFO: stderr: ""
Jun 7 13:51:55.105: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
Jun 7 13:51:55.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:51:55.197: INFO: stderr: ""
Jun 7 13:51:55.197: INFO: stdout: ""
Jun 7 13:51:55.197: INFO: update-demo-nautilus-r2fzp is created but not running
Jun 7 13:52:00.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:00.305: INFO: stderr: ""
Jun 7 13:52:00.305: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
Jun 7 13:52:00.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:00.406: INFO: stderr: ""
Jun 7 13:52:00.406: INFO: stdout: ""
Jun 7 13:52:00.406: INFO: update-demo-nautilus-r2fzp is created but not running
Jun 7 13:52:05.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:05.578: INFO: stderr: ""
Jun 7 13:52:05.578: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
Jun 7 13:52:05.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:05.674: INFO: stderr: ""
Jun 7 13:52:05.674: INFO: stdout: "true"
Jun 7 13:52:05.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r2fzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:05.769: INFO: stderr: ""
Jun 7 13:52:05.769: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:05.769: INFO: validating pod update-demo-nautilus-r2fzp
Jun 7 13:52:05.788: INFO: got data: {
"image": "nautilus.jpg"
}
Jun 7 13:52:05.788: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:05.788: INFO: update-demo-nautilus-r2fzp is verified up and running
Jun 7 13:52:05.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:05.923: INFO: stderr: ""
Jun 7 13:52:05.923: INFO: stdout: "true"
Jun 7 13:52:05.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:06.020: INFO: stderr: ""
Jun 7 13:52:06.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:06.020: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:06.052: INFO: got data: {
"image": "nautilus.jpg"
}
Jun 7 13:52:06.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:06.053: INFO: update-demo-nautilus-v7jdv is verified up and running
STEP: scaling down the replication controller
Jun 7 13:52:06.056: INFO: scanned /root for discovery docs:
Jun 7 13:52:06.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1115'
Jun 7 13:52:07.294: INFO: stderr: ""
Jun 7 13:52:07.294: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 13:52:07.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:07.397: INFO: stderr: ""
Jun 7 13:52:07.397: INFO: stdout: "update-demo-nautilus-r2fzp update-demo-nautilus-v7jdv "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 7 13:52:12.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:12.492: INFO: stderr: ""
Jun 7 13:52:12.492: INFO: stdout: "update-demo-nautilus-v7jdv "
Jun 7 13:52:12.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:12.579: INFO: stderr: ""
Jun 7 13:52:12.579: INFO: stdout: "true"
Jun 7 13:52:12.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:12.676: INFO: stderr: ""
Jun 7 13:52:12.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:12.676: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:12.679: INFO: got data: {
"image": "nautilus.jpg"
}
Jun 7 13:52:12.679: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:12.679: INFO: update-demo-nautilus-v7jdv is verified up and running
STEP: scaling up the replication controller
Jun 7 13:52:12.680: INFO: scanned /root for discovery docs:
Jun 7 13:52:12.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1115'
Jun 7 13:52:13.820: INFO: stderr: ""
Jun 7 13:52:13.820: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 13:52:13.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:13.916: INFO: stderr: ""
Jun 7 13:52:13.916: INFO: stdout: "update-demo-nautilus-v7jdv update-demo-nautilus-xx2wf "
Jun 7 13:52:13.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:14.023: INFO: stderr: ""
Jun 7 13:52:14.024: INFO: stdout: "true"
Jun 7 13:52:14.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:14.118: INFO: stderr: ""
Jun 7 13:52:14.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:14.118: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:14.121: INFO: got data: {
"image": "nautilus.jpg"
}
Jun 7 13:52:14.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:14.121: INFO: update-demo-nautilus-v7jdv is verified up and running
Jun 7 13:52:14.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xx2wf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:14.205: INFO: stderr: ""
Jun 7 13:52:14.205: INFO: stdout: ""
Jun 7 13:52:14.205: INFO: update-demo-nautilus-xx2wf is created but not running
Jun 7 13:52:19.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1115'
Jun 7 13:52:19.299: INFO: stderr: ""
Jun 7 13:52:19.299: INFO: stdout: "update-demo-nautilus-v7jdv update-demo-nautilus-xx2wf "
Jun 7 13:52:19.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.388: INFO: stderr: ""
Jun 7 13:52:19.388: INFO: stdout: "true"
Jun 7 13:52:19.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v7jdv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.509: INFO: stderr: ""
Jun 7 13:52:19.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:19.510: INFO: validating pod update-demo-nautilus-v7jdv
Jun 7 13:52:19.513: INFO: got data: {
"image": "nautilus.jpg"
}
Jun 7 13:52:19.513: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:19.513: INFO: update-demo-nautilus-v7jdv is verified up and running
Jun 7 13:52:19.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xx2wf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.604: INFO: stderr: ""
Jun 7 13:52:19.604: INFO: stdout: "true"
Jun 7 13:52:19.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xx2wf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1115'
Jun 7 13:52:19.701: INFO: stderr: ""
Jun 7 13:52:19.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 13:52:19.701: INFO: validating pod update-demo-nautilus-xx2wf
Jun 7 13:52:19.705: INFO: got data: {
"image": "nautilus.jpg"
}
Jun 7 13:52:19.705: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 13:52:19.705: INFO: update-demo-nautilus-xx2wf is verified up and running
STEP: using delete to clean up resources
Jun 7 13:52:19.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1115'
Jun 7 13:52:19.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 13:52:19.874: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 7 13:52:19.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1115'
Jun 7 13:52:19.971: INFO: stderr: "No resources found.\n"
Jun 7 13:52:19.971: INFO: stdout: ""
Jun 7 13:52:19.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1115 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 13:52:20.097: INFO: stderr: ""
Jun 7 13:52:20.097: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 7 13:52:20.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1115" for this suite.
Jun 7 13:52:44.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 7 13:52:44.416: INFO: namespace kubectl-1115 deletion completed in 24.305170839s
• [SLOW TEST:50.063 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 7 13:52:44.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4212/configmap-test-f807c707-1fb0-4213-a128-a9a3450e6603
STEP: Creating a pod to test consume configMaps
Jun 7 13:52:44.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca" in namespace "configmap-4212" to be "success or failure"
Jun 7 13:52:44.696: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 60.945127ms
Jun 7 13:52:46.701: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065268184s
Jun 7 13:52:48.705: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06934847s
Jun 7 13:52:50.748: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112447103s
Jun 7 13:52:52.752: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116214162s
STEP: Saw pod success
Jun 7 13:52:52.752: INFO: Pod "pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca" satisfied condition "success or failure"
Jun 7 13:52:52.754: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-caa79057-bba6-4bd8-92ec-eb46978028ca container env-test: