I0601 12:55:54.171253 6 e2e.go:243] Starting e2e run "0a15dcd2-06fc-455d-b881-32cae529a883" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591016153 - Will randomize all specs
Will run 215 of 4412 specs
Jun 1 12:55:54.370: INFO: >>> kubeConfig: /root/.kube/config
Jun 1 12:55:54.373: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 1 12:55:54.397: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 1 12:55:54.433: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 1 12:55:54.433: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 1 12:55:54.433: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 1 12:55:54.443: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 1 12:55:54.443: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 1 12:55:54.443: INFO: e2e test version: v1.15.11
Jun 1 12:55:54.444: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 12:55:54.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Jun 1 12:55:54.515: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1746
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1746
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1746
Jun 1 12:55:54.545: INFO: Found 0 stateful pods, waiting for 1
Jun 1 12:56:04.550: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jun 1 12:56:04.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 1 12:56:07.217: INFO: stderr: "I0601 12:56:07.028723 27 log.go:172] (0xc000116e70) (0xc0006e6960) Create stream\nI0601 12:56:07.028789 27 log.go:172] (0xc000116e70) (0xc0006e6960) Stream added, broadcasting: 1\nI0601 12:56:07.032443 27 log.go:172] (0xc000116e70) Reply frame received for 1\nI0601 12:56:07.032494 27 log.go:172] (0xc000116e70) (0xc000a2a000) Create stream\nI0601 12:56:07.032507 27 log.go:172] (0xc000116e70) (0xc000a2a000) Stream added, broadcasting: 3\nI0601 12:56:07.034134 27 log.go:172] (0xc000116e70) Reply frame received for 3\nI0601 12:56:07.034213 27 log.go:172] (0xc000116e70) (0xc0002d2000) Create stream\nI0601 12:56:07.034235 27 log.go:172] (0xc000116e70) (0xc0002d2000) Stream added, broadcasting: 5\nI0601 12:56:07.035428 27 log.go:172]
(0xc000116e70) Reply frame received for 5\nI0601 12:56:07.149060 27 log.go:172] (0xc000116e70) Data frame received for 5\nI0601 12:56:07.149289 27 log.go:172] (0xc0002d2000) (5) Data frame handling\nI0601 12:56:07.149406 27 log.go:172] (0xc0002d2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 12:56:07.205899 27 log.go:172] (0xc000116e70) Data frame received for 3\nI0601 12:56:07.205937 27 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0601 12:56:07.205959 27 log.go:172] (0xc000a2a000) (3) Data frame sent\nI0601 12:56:07.206029 27 log.go:172] (0xc000116e70) Data frame received for 3\nI0601 12:56:07.206048 27 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0601 12:56:07.206610 27 log.go:172] (0xc000116e70) Data frame received for 5\nI0601 12:56:07.206634 27 log.go:172] (0xc0002d2000) (5) Data frame handling\nI0601 12:56:07.208511 27 log.go:172] (0xc000116e70) Data frame received for 1\nI0601 12:56:07.208535 27 log.go:172] (0xc0006e6960) (1) Data frame handling\nI0601 12:56:07.208551 27 log.go:172] (0xc0006e6960) (1) Data frame sent\nI0601 12:56:07.208564 27 log.go:172] (0xc000116e70) (0xc0006e6960) Stream removed, broadcasting: 1\nI0601 12:56:07.208580 27 log.go:172] (0xc000116e70) Go away received\nI0601 12:56:07.209049 27 log.go:172] (0xc000116e70) (0xc0006e6960) Stream removed, broadcasting: 1\nI0601 12:56:07.209090 27 log.go:172] (0xc000116e70) (0xc000a2a000) Stream removed, broadcasting: 3\nI0601 12:56:07.209269 27 log.go:172] (0xc000116e70) (0xc0002d2000) Stream removed, broadcasting: 5\n" Jun 1 12:56:07.217: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 12:56:07.217: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 12:56:07.221: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 1 12:56:17.225: INFO: Waiting for pod ss-0 to enter Running - Ready=false, 
currently Running - Ready=false
Jun 1 12:56:17.225: INFO: Waiting for statefulset status.replicas updated to 0
Jun 1 12:56:17.258: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 1 12:56:17.258: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC }]
Jun 1 12:56:17.258: INFO:
Jun 1 12:56:17.258: INFO: StatefulSet ss has not reached scale 3, at 1
Jun 1 12:56:18.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973834018s
Jun 1 12:56:19.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968374381s
Jun 1 12:56:20.378: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963633067s
Jun 1 12:56:21.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.854514134s
Jun 1 12:56:22.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.849643551s
Jun 1 12:56:23.397: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.840309034s
Jun 1 12:56:24.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.834985039s
Jun 1 12:56:25.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.829476029s
Jun 1 12:56:26.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 823.563292ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1746
Jun 1 12:56:27.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 1 12:56:27.628: INFO: stderr: "I0601
12:56:27.564364 59 log.go:172] (0xc00041c6e0) (0xc000938a00) Create stream\nI0601 12:56:27.564432 59 log.go:172] (0xc00041c6e0) (0xc000938a00) Stream added, broadcasting: 1\nI0601 12:56:27.567361 59 log.go:172] (0xc00041c6e0) Reply frame received for 1\nI0601 12:56:27.567391 59 log.go:172] (0xc00041c6e0) (0xc000938000) Create stream\nI0601 12:56:27.567403 59 log.go:172] (0xc00041c6e0) (0xc000938000) Stream added, broadcasting: 3\nI0601 12:56:27.568238 59 log.go:172] (0xc00041c6e0) Reply frame received for 3\nI0601 12:56:27.568285 59 log.go:172] (0xc00041c6e0) (0xc0007141e0) Create stream\nI0601 12:56:27.568297 59 log.go:172] (0xc00041c6e0) (0xc0007141e0) Stream added, broadcasting: 5\nI0601 12:56:27.569080 59 log.go:172] (0xc00041c6e0) Reply frame received for 5\nI0601 12:56:27.623382 59 log.go:172] (0xc00041c6e0) Data frame received for 3\nI0601 12:56:27.623420 59 log.go:172] (0xc000938000) (3) Data frame handling\nI0601 12:56:27.623436 59 log.go:172] (0xc000938000) (3) Data frame sent\nI0601 12:56:27.623445 59 log.go:172] (0xc00041c6e0) Data frame received for 3\nI0601 12:56:27.623451 59 log.go:172] (0xc000938000) (3) Data frame handling\nI0601 12:56:27.623479 59 log.go:172] (0xc00041c6e0) Data frame received for 5\nI0601 12:56:27.623485 59 log.go:172] (0xc0007141e0) (5) Data frame handling\nI0601 12:56:27.623494 59 log.go:172] (0xc0007141e0) (5) Data frame sent\nI0601 12:56:27.623502 59 log.go:172] (0xc00041c6e0) Data frame received for 5\nI0601 12:56:27.623507 59 log.go:172] (0xc0007141e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 12:56:27.624770 59 log.go:172] (0xc00041c6e0) Data frame received for 1\nI0601 12:56:27.624789 59 log.go:172] (0xc000938a00) (1) Data frame handling\nI0601 12:56:27.624804 59 log.go:172] (0xc000938a00) (1) Data frame sent\nI0601 12:56:27.624819 59 log.go:172] (0xc00041c6e0) (0xc000938a00) Stream removed, broadcasting: 1\nI0601 12:56:27.624851 59 log.go:172] (0xc00041c6e0) Go away received\nI0601 
12:56:27.625369 59 log.go:172] (0xc00041c6e0) (0xc000938a00) Stream removed, broadcasting: 1\nI0601 12:56:27.625396 59 log.go:172] (0xc00041c6e0) (0xc000938000) Stream removed, broadcasting: 3\nI0601 12:56:27.625412 59 log.go:172] (0xc00041c6e0) (0xc0007141e0) Stream removed, broadcasting: 5\n" Jun 1 12:56:27.628: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 12:56:27.628: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 12:56:27.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:56:27.862: INFO: stderr: "I0601 12:56:27.755449 80 log.go:172] (0xc0009002c0) (0xc00082e6e0) Create stream\nI0601 12:56:27.755508 80 log.go:172] (0xc0009002c0) (0xc00082e6e0) Stream added, broadcasting: 1\nI0601 12:56:27.763890 80 log.go:172] (0xc0009002c0) Reply frame received for 1\nI0601 12:56:27.763945 80 log.go:172] (0xc0009002c0) (0xc00033e140) Create stream\nI0601 12:56:27.763959 80 log.go:172] (0xc0009002c0) (0xc00033e140) Stream added, broadcasting: 3\nI0601 12:56:27.772700 80 log.go:172] (0xc0009002c0) Reply frame received for 3\nI0601 12:56:27.772745 80 log.go:172] (0xc0009002c0) (0xc00055a000) Create stream\nI0601 12:56:27.772759 80 log.go:172] (0xc0009002c0) (0xc00055a000) Stream added, broadcasting: 5\nI0601 12:56:27.774045 80 log.go:172] (0xc0009002c0) Reply frame received for 5\nI0601 12:56:27.852298 80 log.go:172] (0xc0009002c0) Data frame received for 5\nI0601 12:56:27.852328 80 log.go:172] (0xc00055a000) (5) Data frame handling\nI0601 12:56:27.852345 80 log.go:172] (0xc00055a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 12:56:27.854053 80 log.go:172] (0xc0009002c0) Data frame received for 3\nI0601 12:56:27.854095 80 log.go:172] (0xc00033e140) (3) Data frame handling\nI0601 
12:56:27.854120 80 log.go:172] (0xc00033e140) (3) Data frame sent\nI0601 12:56:27.854147 80 log.go:172] (0xc0009002c0) Data frame received for 5\nI0601 12:56:27.854192 80 log.go:172] (0xc00055a000) (5) Data frame handling\nI0601 12:56:27.854224 80 log.go:172] (0xc00055a000) (5) Data frame sent\nI0601 12:56:27.854250 80 log.go:172] (0xc0009002c0) Data frame received for 5\nI0601 12:56:27.854270 80 log.go:172] (0xc00055a000) (5) Data frame handling\nI0601 12:56:27.854299 80 log.go:172] (0xc0009002c0) Data frame received for 3\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0601 12:56:27.854324 80 log.go:172] (0xc00033e140) (3) Data frame handling\nI0601 12:56:27.854373 80 log.go:172] (0xc00055a000) (5) Data frame sent\nI0601 12:56:27.854388 80 log.go:172] (0xc0009002c0) Data frame received for 5\nI0601 12:56:27.854399 80 log.go:172] (0xc00055a000) (5) Data frame handling\nI0601 12:56:27.856091 80 log.go:172] (0xc0009002c0) Data frame received for 1\nI0601 12:56:27.856112 80 log.go:172] (0xc00082e6e0) (1) Data frame handling\nI0601 12:56:27.856128 80 log.go:172] (0xc00082e6e0) (1) Data frame sent\nI0601 12:56:27.856140 80 log.go:172] (0xc0009002c0) (0xc00082e6e0) Stream removed, broadcasting: 1\nI0601 12:56:27.856155 80 log.go:172] (0xc0009002c0) Go away received\nI0601 12:56:27.856646 80 log.go:172] (0xc0009002c0) (0xc00082e6e0) Stream removed, broadcasting: 1\nI0601 12:56:27.856663 80 log.go:172] (0xc0009002c0) (0xc00033e140) Stream removed, broadcasting: 3\nI0601 12:56:27.856671 80 log.go:172] (0xc0009002c0) (0xc00055a000) Stream removed, broadcasting: 5\n" Jun 1 12:56:27.862: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 12:56:27.862: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 12:56:27.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-2 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:56:28.075: INFO: stderr: "I0601 12:56:28.000429 102 log.go:172] (0xc000726c60) (0xc000658b40) Create stream\nI0601 12:56:28.000500 102 log.go:172] (0xc000726c60) (0xc000658b40) Stream added, broadcasting: 1\nI0601 12:56:28.003873 102 log.go:172] (0xc000726c60) Reply frame received for 1\nI0601 12:56:28.003915 102 log.go:172] (0xc000726c60) (0xc000658be0) Create stream\nI0601 12:56:28.003927 102 log.go:172] (0xc000726c60) (0xc000658be0) Stream added, broadcasting: 3\nI0601 12:56:28.004750 102 log.go:172] (0xc000726c60) Reply frame received for 3\nI0601 12:56:28.004785 102 log.go:172] (0xc000726c60) (0xc000658c80) Create stream\nI0601 12:56:28.004795 102 log.go:172] (0xc000726c60) (0xc000658c80) Stream added, broadcasting: 5\nI0601 12:56:28.005968 102 log.go:172] (0xc000726c60) Reply frame received for 5\nI0601 12:56:28.068697 102 log.go:172] (0xc000726c60) Data frame received for 3\nI0601 12:56:28.068794 102 log.go:172] (0xc000658be0) (3) Data frame handling\nI0601 12:56:28.068817 102 log.go:172] (0xc000658be0) (3) Data frame sent\nI0601 12:56:28.068836 102 log.go:172] (0xc000726c60) Data frame received for 3\nI0601 12:56:28.068871 102 log.go:172] (0xc000658be0) (3) Data frame handling\nI0601 12:56:28.069429 102 log.go:172] (0xc000726c60) Data frame received for 5\nI0601 12:56:28.069449 102 log.go:172] (0xc000658c80) (5) Data frame handling\nI0601 12:56:28.069458 102 log.go:172] (0xc000658c80) (5) Data frame sent\nI0601 12:56:28.069467 102 log.go:172] (0xc000726c60) Data frame received for 5\nI0601 12:56:28.069478 102 log.go:172] (0xc000658c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0601 12:56:28.070708 102 log.go:172] (0xc000726c60) Data frame received for 1\nI0601 12:56:28.070733 102 log.go:172] (0xc000658b40) (1) Data frame handling\nI0601 12:56:28.070744 102 log.go:172] (0xc000658b40) (1) Data 
frame sent\nI0601 12:56:28.070819 102 log.go:172] (0xc000726c60) (0xc000658b40) Stream removed, broadcasting: 1\nI0601 12:56:28.070842 102 log.go:172] (0xc000726c60) Go away received\nI0601 12:56:28.071529 102 log.go:172] (0xc000726c60) (0xc000658b40) Stream removed, broadcasting: 1\nI0601 12:56:28.071550 102 log.go:172] (0xc000726c60) (0xc000658be0) Stream removed, broadcasting: 3\nI0601 12:56:28.071561 102 log.go:172] (0xc000726c60) (0xc000658c80) Stream removed, broadcasting: 5\n" Jun 1 12:56:28.076: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 12:56:28.076: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 12:56:28.079: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 1 12:56:38.096: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 1 12:56:38.096: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 1 12:56:38.096: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 1 12:56:38.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 12:56:38.303: INFO: stderr: "I0601 12:56:38.229727 123 log.go:172] (0xc000816420) (0xc000380820) Create stream\nI0601 12:56:38.229805 123 log.go:172] (0xc000816420) (0xc000380820) Stream added, broadcasting: 1\nI0601 12:56:38.232071 123 log.go:172] (0xc000816420) Reply frame received for 1\nI0601 12:56:38.232116 123 log.go:172] (0xc000816420) (0xc000906000) Create stream\nI0601 12:56:38.232135 123 log.go:172] (0xc000816420) (0xc000906000) Stream added, broadcasting: 3\nI0601 12:56:38.233331 123 log.go:172] (0xc000816420) Reply frame received for 3\nI0601 
12:56:38.233374 123 log.go:172] (0xc000816420) (0xc000688500) Create stream\nI0601 12:56:38.233388 123 log.go:172] (0xc000816420) (0xc000688500) Stream added, broadcasting: 5\nI0601 12:56:38.234458 123 log.go:172] (0xc000816420) Reply frame received for 5\nI0601 12:56:38.296923 123 log.go:172] (0xc000816420) Data frame received for 3\nI0601 12:56:38.296948 123 log.go:172] (0xc000906000) (3) Data frame handling\nI0601 12:56:38.296956 123 log.go:172] (0xc000906000) (3) Data frame sent\nI0601 12:56:38.296962 123 log.go:172] (0xc000816420) Data frame received for 3\nI0601 12:56:38.297001 123 log.go:172] (0xc000816420) Data frame received for 5\nI0601 12:56:38.297060 123 log.go:172] (0xc000688500) (5) Data frame handling\nI0601 12:56:38.297092 123 log.go:172] (0xc000688500) (5) Data frame sent\nI0601 12:56:38.297103 123 log.go:172] (0xc000816420) Data frame received for 5\nI0601 12:56:38.297305 123 log.go:172] (0xc000688500) (5) Data frame handling\nI0601 12:56:38.297325 123 log.go:172] (0xc000906000) (3) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 12:56:38.298689 123 log.go:172] (0xc000816420) Data frame received for 1\nI0601 12:56:38.298707 123 log.go:172] (0xc000380820) (1) Data frame handling\nI0601 12:56:38.298717 123 log.go:172] (0xc000380820) (1) Data frame sent\nI0601 12:56:38.298746 123 log.go:172] (0xc000816420) (0xc000380820) Stream removed, broadcasting: 1\nI0601 12:56:38.298895 123 log.go:172] (0xc000816420) Go away received\nI0601 12:56:38.299113 123 log.go:172] (0xc000816420) (0xc000380820) Stream removed, broadcasting: 1\nI0601 12:56:38.299130 123 log.go:172] (0xc000816420) (0xc000906000) Stream removed, broadcasting: 3\nI0601 12:56:38.299144 123 log.go:172] (0xc000816420) (0xc000688500) Stream removed, broadcasting: 5\n" Jun 1 12:56:38.303: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 12:56:38.303: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 12:56:38.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 12:56:38.544: INFO: stderr: "I0601 12:56:38.429790 143 log.go:172] (0xc0008782c0) (0xc0008fa6e0) Create stream\nI0601 12:56:38.429843 143 log.go:172] (0xc0008782c0) (0xc0008fa6e0) Stream added, broadcasting: 1\nI0601 12:56:38.432618 143 log.go:172] (0xc0008782c0) Reply frame received for 1\nI0601 12:56:38.432676 143 log.go:172] (0xc0008782c0) (0xc0008fa780) Create stream\nI0601 12:56:38.432690 143 log.go:172] (0xc0008782c0) (0xc0008fa780) Stream added, broadcasting: 3\nI0601 12:56:38.434031 143 log.go:172] (0xc0008782c0) Reply frame received for 3\nI0601 12:56:38.434074 143 log.go:172] (0xc0008782c0) (0xc0006ac320) Create stream\nI0601 12:56:38.434083 143 log.go:172] (0xc0008782c0) (0xc0006ac320) Stream added, broadcasting: 5\nI0601 12:56:38.435104 143 log.go:172] (0xc0008782c0) Reply frame received for 5\nI0601 12:56:38.502539 143 log.go:172] (0xc0008782c0) Data frame received for 5\nI0601 12:56:38.502567 143 log.go:172] (0xc0006ac320) (5) Data frame handling\nI0601 12:56:38.502586 143 log.go:172] (0xc0006ac320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 12:56:38.536371 143 log.go:172] (0xc0008782c0) Data frame received for 3\nI0601 12:56:38.536397 143 log.go:172] (0xc0008fa780) (3) Data frame handling\nI0601 12:56:38.536409 143 log.go:172] (0xc0008fa780) (3) Data frame sent\nI0601 12:56:38.536414 143 log.go:172] (0xc0008782c0) Data frame received for 3\nI0601 12:56:38.536418 143 log.go:172] (0xc0008fa780) (3) Data frame handling\nI0601 12:56:38.536611 143 log.go:172] (0xc0008782c0) Data frame received for 5\nI0601 12:56:38.536628 143 log.go:172] (0xc0006ac320) (5) Data frame handling\nI0601 12:56:38.538895 143 log.go:172] (0xc0008782c0) Data frame received for 1\nI0601 
12:56:38.538910 143 log.go:172] (0xc0008fa6e0) (1) Data frame handling\nI0601 12:56:38.538915 143 log.go:172] (0xc0008fa6e0) (1) Data frame sent\nI0601 12:56:38.538923 143 log.go:172] (0xc0008782c0) (0xc0008fa6e0) Stream removed, broadcasting: 1\nI0601 12:56:38.538933 143 log.go:172] (0xc0008782c0) Go away received\nI0601 12:56:38.539404 143 log.go:172] (0xc0008782c0) (0xc0008fa6e0) Stream removed, broadcasting: 1\nI0601 12:56:38.539444 143 log.go:172] (0xc0008782c0) (0xc0008fa780) Stream removed, broadcasting: 3\nI0601 12:56:38.539461 143 log.go:172] (0xc0008782c0) (0xc0006ac320) Stream removed, broadcasting: 5\n" Jun 1 12:56:38.544: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 12:56:38.544: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 12:56:38.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 12:56:38.796: INFO: stderr: "I0601 12:56:38.676586 165 log.go:172] (0xc0008ce420) (0xc00065c780) Create stream\nI0601 12:56:38.676639 165 log.go:172] (0xc0008ce420) (0xc00065c780) Stream added, broadcasting: 1\nI0601 12:56:38.680784 165 log.go:172] (0xc0008ce420) Reply frame received for 1\nI0601 12:56:38.681063 165 log.go:172] (0xc0008ce420) (0xc00065c000) Create stream\nI0601 12:56:38.681077 165 log.go:172] (0xc0008ce420) (0xc00065c000) Stream added, broadcasting: 3\nI0601 12:56:38.682289 165 log.go:172] (0xc0008ce420) Reply frame received for 3\nI0601 12:56:38.682328 165 log.go:172] (0xc0008ce420) (0xc0005f8280) Create stream\nI0601 12:56:38.682340 165 log.go:172] (0xc0008ce420) (0xc0005f8280) Stream added, broadcasting: 5\nI0601 12:56:38.683164 165 log.go:172] (0xc0008ce420) Reply frame received for 5\nI0601 12:56:38.747824 165 log.go:172] (0xc0008ce420) Data frame received for 5\nI0601 12:56:38.747855 
165 log.go:172] (0xc0005f8280) (5) Data frame handling\nI0601 12:56:38.747871 165 log.go:172] (0xc0005f8280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 12:56:38.788305 165 log.go:172] (0xc0008ce420) Data frame received for 3\nI0601 12:56:38.788343 165 log.go:172] (0xc00065c000) (3) Data frame handling\nI0601 12:56:38.788366 165 log.go:172] (0xc00065c000) (3) Data frame sent\nI0601 12:56:38.788600 165 log.go:172] (0xc0008ce420) Data frame received for 3\nI0601 12:56:38.788666 165 log.go:172] (0xc00065c000) (3) Data frame handling\nI0601 12:56:38.788756 165 log.go:172] (0xc0008ce420) Data frame received for 5\nI0601 12:56:38.788799 165 log.go:172] (0xc0005f8280) (5) Data frame handling\nI0601 12:56:38.791171 165 log.go:172] (0xc0008ce420) Data frame received for 1\nI0601 12:56:38.791228 165 log.go:172] (0xc00065c780) (1) Data frame handling\nI0601 12:56:38.791253 165 log.go:172] (0xc00065c780) (1) Data frame sent\nI0601 12:56:38.791277 165 log.go:172] (0xc0008ce420) (0xc00065c780) Stream removed, broadcasting: 1\nI0601 12:56:38.791316 165 log.go:172] (0xc0008ce420) Go away received\nI0601 12:56:38.791761 165 log.go:172] (0xc0008ce420) (0xc00065c780) Stream removed, broadcasting: 1\nI0601 12:56:38.791782 165 log.go:172] (0xc0008ce420) (0xc00065c000) Stream removed, broadcasting: 3\nI0601 12:56:38.791798 165 log.go:172] (0xc0008ce420) (0xc0005f8280) Stream removed, broadcasting: 5\n" Jun 1 12:56:38.796: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 12:56:38.796: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 12:56:38.796: INFO: Waiting for statefulset status.replicas updated to 0 Jun 1 12:56:38.799: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 1 12:56:48.809: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 1 12:56:48.809: INFO: 
Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 1 12:56:48.809: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 1 12:56:48.823: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:48.823: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC }] Jun 1 12:56:48.823: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:48.824: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:48.824: INFO: Jun 1 12:56:48.824: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 1 12:56:49.828: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:49.829: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 
12:55:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC }] Jun 1 12:56:49.829: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:49.829: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:49.829: INFO: Jun 1 12:56:49.829: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 1 12:56:50.838: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:50.838: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 
12:55:54 +0000 UTC }] Jun 1 12:56:50.838: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:50.838: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:50.838: INFO: Jun 1 12:56:50.838: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 1 12:56:51.843: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:51.843: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:55:54 +0000 UTC }] Jun 1 12:56:51.843: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 
12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:51.843: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:51.843: INFO: Jun 1 12:56:51.843: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 1 12:56:52.848: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:52.848: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:52.848: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:52.848: INFO: Jun 1 12:56:52.848: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 1 12:56:53.852: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:53.852: INFO: ss-1 iruya-worker 
Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:53.852: INFO: Jun 1 12:56:53.852: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 1 12:56:54.857: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:54.857: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:54.857: INFO: Jun 1 12:56:54.857: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 1 12:56:55.861: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:55.861: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }] Jun 1 12:56:55.861: INFO: Jun 1 12:56:55.861: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 1 12:56:56.865: INFO: POD NODE PHASE GRACE CONDITIONS Jun 1 12:56:56.865: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }]
Jun 1 12:56:56.865: INFO:
Jun 1 12:56:56.865: INFO: StatefulSet ss has not reached scale 0, at 1
Jun 1 12:56:57.871: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 1 12:56:57.871: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 12:56:17 +0000 UTC }]
Jun 1 12:56:57.871: INFO:
Jun 1 12:56:57.871: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1746
Jun 1 12:56:58.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 1 12:56:59.012: INFO: rc: 1
Jun 1 12:56:59.012: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ff05a0 exit status 1 true [0xc002b65b80 0xc002b65b98 0xc002b65bb0] [0xc002b65b80 0xc002b65b98 0xc002b65bb0] [0xc002b65b90 0xc002b65ba8] [0xba70e0 0xba70e0]
0xc00248d980 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 1 12:57:09.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:57:09.108: INFO: rc: 1 Jun 1 12:57:09.108: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001ff0660 exit status 1 true [0xc002b65bb8 0xc002b65bd0 0xc002b65be8] [0xc002b65bb8 0xc002b65bd0 0xc002b65be8] [0xc002b65bc8 0xc002b65be0] [0xba70e0 0xba70e0] 0xc00248dc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:57:19.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:57:19.204: INFO: rc: 1 Jun 1 12:57:19.205: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001ff0720 exit status 1 true [0xc002b65bf0 0xc002b65c08 0xc002b65c20] [0xc002b65bf0 0xc002b65c08 0xc002b65c20] [0xc002b65c00 0xc002b65c18] [0xba70e0 0xba70e0] 0xc00248df80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:57:29.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:57:29.308: INFO: rc: 
1 Jun 1 12:57:29.308: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001ff0810 exit status 1 true [0xc002b65c28 0xc002b65c40 0xc002b65c58] [0xc002b65c28 0xc002b65c40 0xc002b65c58] [0xc002b65c38 0xc002b65c50] [0xba70e0 0xba70e0] 0xc0020a02a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:57:39.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:57:39.407: INFO: rc: 1 Jun 1 12:57:39.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001f56960 exit status 1 true [0xc000945e28 0xc000945e40 0xc000945e58] [0xc000945e28 0xc000945e40 0xc000945e58] [0xc000945e38 0xc000945e50] [0xba70e0 0xba70e0] 0xc001f723c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:57:49.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:57:49.513: INFO: rc: 1 Jun 1 12:57:49.513: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0012590b0 exit status 1 true 
[0xc00160c240 0xc00160c258 0xc00160c270] [0xc00160c240 0xc00160c258 0xc00160c270] [0xc00160c250 0xc00160c268] [0xba70e0 0xba70e0] 0xc001f41980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:57:59.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:57:59.609: INFO: rc: 1 Jun 1 12:57:59.609: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002a78090 exit status 1 true [0xc002a50020 0xc002a50038 0xc002a50050] [0xc002a50020 0xc002a50038 0xc002a50050] [0xc002a50030 0xc002a50048] [0xba70e0 0xba70e0] 0xc002de7140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:58:09.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:58:09.713: INFO: rc: 1 Jun 1 12:58:09.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003a6810 exit status 1 true [0xc00181a010 0xc00181a158 0xc00181a268] [0xc00181a010 0xc00181a158 0xc00181a268] [0xc00181a108 0xc00181a238] [0xba70e0 0xba70e0] 0xc0020026c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:58:19.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:58:19.812: INFO: rc: 1 Jun 1 12:58:19.812: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002a78180 exit status 1 true [0xc002a50058 0xc002a50090 0xc002a500e0] [0xc002a50058 0xc002a50090 0xc002a500e0] [0xc002a50070 0xc002a500c8] [0xba70e0 0xba70e0] 0xc002de74a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:58:29.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:58:29.907: INFO: rc: 1 Jun 1 12:58:29.907: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979260 exit status 1 true [0xc000010080 0xc0000103c0 0xc000010420] [0xc000010080 0xc0000103c0 0xc000010420] [0xc000010340 0xc000010400] [0xba70e0 0xba70e0] 0xc002140780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:58:39.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:58:40.002: INFO: rc: 1 Jun 1 12:58:40.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979320 exit status 1 true [0xc000010540 0xc0000105c0 0xc000010688] [0xc000010540 0xc0000105c0 0xc000010688] [0xc0000105b0 0xc000010630] [0xba70e0 0xba70e0] 0xc002140d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:58:50.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:58:50.090: INFO: rc: 1 Jun 1 12:58:50.090: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003a68d0 exit status 1 true [0xc00181a288 0xc00181a300 0xc00181a370] [0xc00181a288 0xc00181a300 0xc00181a370] [0xc00181a2a8 0xc00181a358] [0xba70e0 0xba70e0] 0xc002002c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:59:00.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:59:00.191: INFO: rc: 1 Jun 1 12:59:00.191: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979410 exit status 1 true [0xc0000106e0 0xc000010738 0xc0000107b8] [0xc0000106e0 0xc000010738 0xc0000107b8] [0xc000010710 0xc000010778] [0xba70e0 0xba70e0] 0xc002141260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not 
found error: exit status 1 Jun 1 12:59:10.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:59:10.286: INFO: rc: 1 Jun 1 12:59:10.286: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0009794d0 exit status 1 true [0xc0000107c8 0xc0000107f8 0xc000010840] [0xc0000107c8 0xc0000107f8 0xc000010840] [0xc0000107e8 0xc000010830] [0xba70e0 0xba70e0] 0xc002141800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:59:20.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:59:20.377: INFO: rc: 1 Jun 1 12:59:20.377: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003a69f0 exit status 1 true [0xc00181a380 0xc00181a478 0xc00181a520] [0xc00181a380 0xc00181a478 0xc00181a520] [0xc00181a460 0xc00181a4e0] [0xba70e0 0xba70e0] 0xc002003140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:59:30.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:59:30.470: INFO: rc: 1 Jun 1 12:59:30.471: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979590 exit status 1 true [0xc000010848 0xc000010888 0xc0000108f8] [0xc000010848 0xc000010888 0xc0000108f8] [0xc000010860 0xc0000108c0] [0xba70e0 0xba70e0] 0xc002141da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:59:40.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:59:40.568: INFO: rc: 1 Jun 1 12:59:40.568: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003a6ab0 exit status 1 true [0xc00181a580 0xc00181a650 0xc00181a6a8] [0xc00181a580 0xc00181a650 0xc00181a6a8] [0xc00181a640 0xc00181a698] [0xba70e0 0xba70e0] 0xc0020037a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 12:59:50.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 12:59:50.665: INFO: rc: 1 Jun 1 12:59:50.665: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003a6ba0 exit status 1 true [0xc00181a6d0 0xc00181a778 0xc00181a800] [0xc00181a6d0 0xc00181a778 0xc00181a800] 
[0xc00181a6f8 0xc00181a7c8] [0xba70e0 0xba70e0] 0xc002288720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:00:00.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:00:00.757: INFO: rc: 1 Jun 1 13:00:00.757: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002a780c0 exit status 1 true [0xc002a50018 0xc002a50030 0xc002a50048] [0xc002a50018 0xc002a50030 0xc002a50048] [0xc002a50028 0xc002a50040] [0xba70e0 0xba70e0] 0xc0020026c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:00:10.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:00:10.859: INFO: rc: 1 Jun 1 13:00:10.859: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979230 exit status 1 true [0xc000010080 0xc0000103c0 0xc000010420] [0xc000010080 0xc0000103c0 0xc000010420] [0xc000010340 0xc000010400] [0xba70e0 0xba70e0] 0xc002de7140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:00:20.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jun 1 13:00:20.961: INFO: rc: 1 Jun 1 13:00:20.961: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979380 exit status 1 true [0xc000010540 0xc0000105c0 0xc000010688] [0xc000010540 0xc0000105c0 0xc000010688] [0xc0000105b0 0xc000010630] [0xba70e0 0xba70e0] 0xc002de74a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:00:30.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:00:31.064: INFO: rc: 1 Jun 1 13:00:31.064: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0009794a0 exit status 1 true [0xc0000106e0 0xc000010738 0xc0000107b8] [0xc0000106e0 0xc000010738 0xc0000107b8] [0xc000010710 0xc000010778] [0xba70e0 0xba70e0] 0xc002de77a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:00:41.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:00:41.158: INFO: rc: 1 Jun 1 13:00:41.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 
0xc0009795c0 exit status 1 true [0xc0000107c8 0xc0000107f8 0xc000010840] [0xc0000107c8 0xc0000107f8 0xc000010840] [0xc0000107e8 0xc000010830] [0xba70e0 0xba70e0] 0xc002de7aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:00:51.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:00:51.254: INFO: rc: 1 Jun 1 13:00:51.254: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000f56060 exit status 1 true [0xc00181a010 0xc00181a158 0xc00181a268] [0xc00181a010 0xc00181a158 0xc00181a268] [0xc00181a108 0xc00181a238] [0xba70e0 0xba70e0] 0xc002140780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:01:01.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:01:01.350: INFO: rc: 1 Jun 1 13:01:01.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000f56150 exit status 1 true [0xc00181a288 0xc00181a300 0xc00181a370] [0xc00181a288 0xc00181a300 0xc00181a370] [0xc00181a2a8 0xc00181a358] [0xba70e0 0xba70e0] 0xc002140de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:01:11.350: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:01:11.457: INFO: rc: 1 Jun 1 13:01:11.457: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979680 exit status 1 true [0xc000010848 0xc000010888 0xc0000108f8] [0xc000010848 0xc000010888 0xc0000108f8] [0xc000010860 0xc0000108c0] [0xba70e0 0xba70e0] 0xc002de7da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:01:21.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:01:21.558: INFO: rc: 1 Jun 1 13:01:21.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979740 exit status 1 true [0xc000010900 0xc000010920 0xc000010938] [0xc000010900 0xc000010920 0xc000010938] [0xc000010918 0xc000010930] [0xba70e0 0xba70e0] 0xc002288a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:01:31.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:01:31.673: INFO: rc: 1 Jun 1 13:01:31.673: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 
ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002a781e0 exit status 1 true [0xc002a50058 0xc002a50090 0xc002a500e0] [0xc002a50058 0xc002a50090 0xc002a500e0] [0xc002a50070 0xc002a500c8] [0xba70e0 0xba70e0] 0xc002002de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:01:41.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:01:41.774: INFO: rc: 1 Jun 1 13:01:41.774: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000979830 exit status 1 true [0xc000010968 0xc0000109a8 0xc0000109c0] [0xc000010968 0xc0000109a8 0xc0000109c0] [0xc000010998 0xc0000109b8] [0xba70e0 0xba70e0] 0xc0022895c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 1 13:01:51.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:01:51.871: INFO: rc: 1 Jun 1 13:01:51.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003a6930 exit status 1 true [0xc00103e090 0xc00103e250 0xc00103e3d0] [0xc00103e090 0xc00103e250 0xc00103e3d0] [0xc00103e210 0xc00103e398] [0xba70e0 0xba70e0] 0xc002342ba0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-1" not found
error: exit status 1
Jun 1 13:02:01.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 1 13:02:01.969: INFO: rc: 1
Jun 1 13:02:01.969: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1:
Jun 1 13:02:01.969: INFO: Scaling statefulset ss to 0
Jun 1 13:02:01.976: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 1 13:02:01.978: INFO: Deleting all statefulset in ns statefulset-1746
Jun 1 13:02:01.980: INFO: Scaling statefulset ss to 0
Jun 1 13:02:01.987: INFO: Waiting for statefulset status.replicas updated to 0
Jun 1 13:02:01.989: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:02:02.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1746" for this suite.
Jun 1 13:02:08.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:02:08.102: INFO: namespace statefulset-1746 deletion completed in 6.096205236s

• [SLOW TEST:373.658 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:02:08.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 13:02:08.175: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jun 1 13:02:08.204: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:08.227: INFO: Number of nodes with available pods: 0
Jun 1 13:02:08.227: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:02:09.231: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:09.234: INFO: Number of nodes with available pods: 0
Jun 1 13:02:09.234: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:02:10.234: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:10.237: INFO: Number of nodes with available pods: 0
Jun 1 13:02:10.237: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:02:11.232: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:11.235: INFO: Number of nodes with available pods: 0
Jun 1 13:02:11.235: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:02:12.232: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:12.235: INFO: Number of nodes with available pods: 1
Jun 1 13:02:12.235: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 1 13:02:13.231: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:13.235: INFO: Number of nodes with available pods: 2
Jun 1 13:02:13.235: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun 1 13:02:13.292: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:13.292: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:13.299: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:14.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:14.304: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:14.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:15.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:15.304: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:15.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:16.303: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:16.303: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:16.303: INFO: Pod daemon-set-tph86 is not available
Jun 1 13:02:16.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:17.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:17.304: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:17.304: INFO: Pod daemon-set-tph86 is not available
Jun 1 13:02:17.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:18.303: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:18.303: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:18.303: INFO: Pod daemon-set-tph86 is not available
Jun 1 13:02:18.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:02:19.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:19.304: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 1 13:02:19.304: INFO: Pod daemon-set-tph86 is not available Jun 1 13:02:19.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:20.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:20.304: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:20.304: INFO: Pod daemon-set-tph86 is not available Jun 1 13:02:20.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:21.303: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:21.304: INFO: Wrong image for pod: daemon-set-tph86. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:21.304: INFO: Pod daemon-set-tph86 is not available Jun 1 13:02:21.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:22.303: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:22.303: INFO: Pod daemon-set-qz4hh is not available Jun 1 13:02:22.306: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:23.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 1 13:02:23.304: INFO: Pod daemon-set-qz4hh is not available Jun 1 13:02:23.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:24.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:24.304: INFO: Pod daemon-set-qz4hh is not available Jun 1 13:02:24.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:25.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:25.304: INFO: Pod daemon-set-qz4hh is not available Jun 1 13:02:25.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:26.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:26.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:27.303: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:27.303: INFO: Pod daemon-set-5npvb is not available Jun 1 13:02:27.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:28.304: INFO: Wrong image for pod: daemon-set-5npvb. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:28.304: INFO: Pod daemon-set-5npvb is not available Jun 1 13:02:28.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:29.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:29.304: INFO: Pod daemon-set-5npvb is not available Jun 1 13:02:29.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:30.303: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:30.303: INFO: Pod daemon-set-5npvb is not available Jun 1 13:02:30.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:31.304: INFO: Wrong image for pod: daemon-set-5npvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 1 13:02:31.304: INFO: Pod daemon-set-5npvb is not available Jun 1 13:02:31.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:32.304: INFO: Pod daemon-set-b8wms is not available Jun 1 13:02:32.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
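The rolling-update poll above repeats two checks until both pass: every daemon pod must carry the new image, and every pod must be available. A minimal Python sketch of that convergence logic (the pod dicts are hypothetical stand-ins for the `v1.Pod` objects the Go e2e framework actually inspects):

```python
# Sketch of the rollout check driving the log loop above: a DaemonSet update
# is complete only when every pod runs the new image AND reports available.
NEW_IMAGE = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
OLD_IMAGE = "docker.io/library/nginx:1.14-alpine"

def rollout_complete(pods, expected_image):
    """Return (done, messages) mirroring the two log messages:
    'Wrong image for pod: ...' and 'Pod ... is not available'."""
    messages = []
    done = True
    for pod in pods:
        if pod["image"] != expected_image:
            messages.append(f"Wrong image for pod: {pod['name']}. "
                            f"Expected: {expected_image}, got: {pod['image']}.")
            done = False
        if not pod["available"]:
            messages.append(f"Pod {pod['name']} is not available")
            done = False
    return done, messages

# Mid-rollout snapshot like the one in the log: one pod still on the old
# image, one replacement pod created but not yet available.
pods = [
    {"name": "daemon-set-5npvb", "image": OLD_IMAGE, "available": True},
    {"name": "daemon-set-qz4hh", "image": NEW_IMAGE, "available": False},
]
done, msgs = rollout_complete(pods, NEW_IMAGE)
```

The e2e suite keeps polling (roughly once per second, per the timestamps) until `done` would be true for all pods.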
Jun 1 13:02:32.311: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:32.314: INFO: Number of nodes with available pods: 1 Jun 1 13:02:32.314: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:02:33.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:33.323: INFO: Number of nodes with available pods: 1 Jun 1 13:02:33.323: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:02:34.319: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:34.366: INFO: Number of nodes with available pods: 1 Jun 1 13:02:34.366: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:02:35.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:35.327: INFO: Number of nodes with available pods: 1 Jun 1 13:02:35.327: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:02:36.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:02:36.324: INFO: Number of nodes with available pods: 2 Jun 1 13:02:36.324: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9227, will wait for the garbage collector to delete the 
pods Jun 1 13:02:36.403: INFO: Deleting DaemonSet.extensions daemon-set took: 11.066205ms Jun 1 13:02:36.704: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.278826ms Jun 1 13:02:42.308: INFO: Number of nodes with available pods: 0 Jun 1 13:02:42.308: INFO: Number of running nodes: 0, number of available pods: 0 Jun 1 13:02:42.311: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9227/daemonsets","resourceVersion":"14079211"},"items":null} Jun 1 13:02:42.315: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9227/pods","resourceVersion":"14079211"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:02:42.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9227" for this suite. Jun 1 13:02:48.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:02:48.440: INFO: namespace daemonsets-9227 deletion completed in 6.09338762s • [SLOW TEST:40.338 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:02:48.440: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jun 1 13:02:48.536: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:02:48.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-986" for this suite. Jun 1 13:02:54.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:02:54.716: INFO: namespace kubectl-986 deletion completed in 6.094433657s • [SLOW TEST:6.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:02:54.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-f3a9be04-f7b6-43e6-bdd6-1a3fb6f7b7cd STEP: Creating a pod to test consume configMaps Jun 1 13:02:54.900: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6" in namespace "projected-6942" to be "success or failure" Jun 1 13:02:54.928: INFO: Pod "pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.633507ms Jun 1 13:02:57.018: INFO: Pod "pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118781205s Jun 1 13:02:59.023: INFO: Pod "pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.12340072s STEP: Saw pod success Jun 1 13:02:59.023: INFO: Pod "pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6" satisfied condition "success or failure" Jun 1 13:02:59.026: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6 container projected-configmap-volume-test: STEP: delete the pod Jun 1 13:02:59.049: INFO: Waiting for pod pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6 to disappear Jun 1 13:02:59.054: INFO: Pod pod-projected-configmaps-388116e0-4c53-4053-833b-25a1246582e6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:02:59.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6942" for this suite. Jun 1 13:03:05.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:03:05.194: INFO: namespace projected-6942 deletion completed in 6.093936191s • [SLOW TEST:10.478 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:03:05.194: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:03:09.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9216" for this suite. Jun 1 13:03:55.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:03:55.468: INFO: namespace kubelet-test-9216 deletion completed in 46.122821285s • [SLOW TEST:50.273 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:03:55.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-s4z5 STEP: Creating a pod to test atomic-volume-subpath Jun 1 13:03:55.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-s4z5" in namespace "subpath-7044" to be "success or failure" Jun 1 13:03:55.594: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.965493ms Jun 1 13:03:57.600: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010094211s Jun 1 13:03:59.605: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 4.015147581s Jun 1 13:04:01.642: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 6.052269s Jun 1 13:04:03.647: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 8.056916939s Jun 1 13:04:05.652: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 10.061762763s Jun 1 13:04:07.656: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 12.065995027s Jun 1 13:04:09.660: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 14.07041945s Jun 1 13:04:11.664: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 16.074106254s Jun 1 13:04:13.669: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.078552789s Jun 1 13:04:15.673: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 20.083118383s Jun 1 13:04:17.677: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Running", Reason="", readiness=true. Elapsed: 22.087514402s Jun 1 13:04:19.708: INFO: Pod "pod-subpath-test-projected-s4z5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118036616s STEP: Saw pod success Jun 1 13:04:19.708: INFO: Pod "pod-subpath-test-projected-s4z5" satisfied condition "success or failure" Jun 1 13:04:19.711: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-s4z5 container test-container-subpath-projected-s4z5: STEP: delete the pod Jun 1 13:04:19.729: INFO: Waiting for pod pod-subpath-test-projected-s4z5 to disappear Jun 1 13:04:19.733: INFO: Pod pod-subpath-test-projected-s4z5 no longer exists STEP: Deleting pod pod-subpath-test-projected-s4z5 Jun 1 13:04:19.733: INFO: Deleting pod "pod-subpath-test-projected-s4z5" in namespace "subpath-7044" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:04:19.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7044" for this suite. 
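The subpath test above mounts a projected volume into the container through a `subPath`, so only one path inside the volume is exposed at the mount point. A sketch of the kind of pod spec involved, expressed as a plain dict (field names follow the Kubernetes Pod API; the image and subPath values here are illustrative, not taken from the test source):

```python
# Illustrative shape of an atomic-writer subPath pod spec: the container
# mounts only the "projected-s4z5" sub-directory of the projected volume.
pod_spec = {
    "containers": [{
        "name": "test-container-subpath-projected-s4z5",
        "image": "docker.io/library/busybox:1.29",  # illustrative image
        "volumeMounts": [{
            "name": "test-volume",
            "mountPath": "/test-volume",
            "subPath": "projected-s4z5",  # exposes one path, not the whole volume
        }],
    }],
    "volumes": [{
        "name": "test-volume",
        "projected": {"sources": [{"configMap": {"name": "my-configmap"}}]},
    }],
}
```

Projected volumes are "atomic writer" volumes: updates land via a symlink swap, which is why subPath behavior on them gets its own conformance test.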
Jun 1 13:04:25.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:04:25.856: INFO: namespace subpath-7044 deletion completed in 6.116974907s • [SLOW TEST:30.388 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:04:25.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-a114021a-a9de-4d22-9d9f-616fa14dac8d STEP: Creating secret with name s-test-opt-upd-702b9ebb-efee-4b46-959d-1dacf2e8dde5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a114021a-a9de-4d22-9d9f-616fa14dac8d STEP: Updating secret s-test-opt-upd-702b9ebb-efee-4b46-959d-1dacf2e8dde5 STEP: Creating secret with name s-test-opt-create-79153f95-e7ce-4e80-a4b0-927ccbbd517d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:04:36.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1038" for this suite. Jun 1 13:05:00.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:05:01.081: INFO: namespace projected-1038 deletion completed in 24.941076873s • [SLOW TEST:35.226 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:05:01.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:05:05.324: INFO: Waiting up to 5m0s for pod "client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6" in namespace "pods-5834" to be "success or failure" Jun 1 13:05:05.335: INFO: Pod "client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.401434ms Jun 1 13:05:07.595: INFO: Pod "client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271270129s Jun 1 13:05:09.599: INFO: Pod "client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.275061554s STEP: Saw pod success Jun 1 13:05:09.599: INFO: Pod "client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6" satisfied condition "success or failure" Jun 1 13:05:09.602: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6 container env3cont: STEP: delete the pod Jun 1 13:05:09.651: INFO: Waiting for pod client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6 to disappear Jun 1 13:05:09.668: INFO: Pod client-envvars-d528f649-ba45-47eb-aed2-021e974ab4e6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:05:09.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5834" for this suite. 
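The pods test above verifies the env vars kubelet injects for services that exist before the pod starts. The naming convention is the documented docker-links style: the service name upper-cased with dashes turned into underscores, suffixed with `_SERVICE_HOST` and `_SERVICE_PORT`. A small sketch (the service name, IP, and port are made-up examples):

```python
# Sketch of the kubelet service env var naming convention checked by the
# "should contain environment variables for services" test.
def service_env_vars(name, cluster_ip, port):
    """Build the two core injected variables for one service."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,   # the service's ClusterIP
        f"{prefix}_SERVICE_PORT": str(port),    # the service's first port
    }

env = service_env_vars("fooservice-1", "10.96.0.10", 8765)
```

Because the variables are captured at container start, only services created before the pod are visible this way, which is why the test creates the service first and then launches the client pod.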
Jun 1 13:05:55.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:05:55.774: INFO: namespace pods-5834 deletion completed in 46.103554813s • [SLOW TEST:54.692 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:05:55.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
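The per-node availability check that follows (and the repeated "can't tolerate node iruya-control-plane" lines throughout this log) comes down to two steps: drop nodes whose `NoSchedule` taints the DaemonSet does not tolerate, then require an available daemon pod on every remaining node. A sketch of the node-filtering half, with hypothetical node dicts in place of the framework's `v1.Node` objects:

```python
# Sketch of why iruya-control-plane is skipped in the log: it carries a
# NoSchedule taint (node-role.kubernetes.io/master) the test DaemonSet
# does not tolerate, so it is excluded from the availability count.
def checkable_nodes(nodes, tolerated_taint_keys):
    """Return nodes on which the DaemonSet is expected to place a pod."""
    keep = []
    for node in nodes:
        no_schedule = {t["key"] for t in node.get("taints", [])
                       if t["effect"] == "NoSchedule"}
        if no_schedule - set(tolerated_taint_keys):
            continue  # untolerated NoSchedule taint -> skip checking this node
        keep.append(node)
    return keep

nodes = [
    {"name": "iruya-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master",
                 "effect": "NoSchedule"}]},
    {"name": "iruya-worker", "taints": []},
    {"name": "iruya-worker2", "taints": []},
]
eligible = checkable_nodes(nodes, tolerated_taint_keys=[])
```

With both workers eligible, the suite polls until "Number of running nodes: 2, number of available pods: 2", exactly as the log records.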
Jun 1 13:05:55.881: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:05:55.885: INFO: Number of nodes with available pods: 0
Jun 1 13:05:55.885: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:05:56.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:05:56.891: INFO: Number of nodes with available pods: 0
Jun 1 13:05:56.891: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:05:58.009: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:05:58.055: INFO: Number of nodes with available pods: 0
Jun 1 13:05:58.055: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:06:00.156: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:06:00.160: INFO: Number of nodes with available pods: 0
Jun 1 13:06:00.160: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:06:00.994: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:06:00.998: INFO: Number of nodes with available pods: 0
Jun 1 13:06:00.998: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:06:01.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:06:01.893: INFO: Number of nodes with available pods: 0
Jun 1 13:06:01.893: INFO: Node iruya-worker is running more than one daemon pod
Jun 1 13:06:02.915: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:06:02.919: INFO: Number of nodes with available pods: 2
Jun 1 13:06:02.919: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun 1 13:06:02.976: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 1 13:06:02.987: INFO: Number of nodes with available pods: 2
Jun 1 13:06:02.987: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4628, will wait for the garbage collector to delete the pods
Jun 1 13:06:04.085: INFO: Deleting DaemonSet.extensions daemon-set took: 6.641826ms
Jun 1 13:06:04.385: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.263875ms
Jun 1 13:06:06.989: INFO: Number of nodes with available pods: 0
Jun 1 13:06:06.989: INFO: Number of running nodes: 0, number of available pods: 0
Jun 1 13:06:06.991: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4628/daemonsets","resourceVersion":"14079870"},"items":null}
Jun 1 13:06:06.993: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4628/pods","resourceVersion":"14079870"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:06:07.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4628" for this suite.
Jun 1 13:06:13.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:06:13.116: INFO: namespace daemonsets-4628 deletion completed in 6.110241839s
• [SLOW TEST:17.341 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:06:13.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 1 13:06:13.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6040'
Jun 1 13:06:15.851: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 1 13:06:15.851: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jun 1 13:06:17.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6040'
Jun 1 13:06:18.004: INFO: stderr: ""
Jun 1 13:06:18.004: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:06:18.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6040" for this suite.
Jun 1 13:06:24.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:06:24.100: INFO: namespace kubectl-6040 deletion completed in 6.092509502s
• [SLOW TEST:10.984 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:06:24.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jun 1 13:06:24.183: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5356,SelfLink:/api/v1/namespaces/watch-5356/configmaps/e2e-watch-test-watch-closed,UID:ba92fb67-3623-41bf-ac4d-94fdbe313f1a,ResourceVersion:14079964,Generation:0,CreationTimestamp:2020-06-01 13:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 1 13:06:24.183: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5356,SelfLink:/api/v1/namespaces/watch-5356/configmaps/e2e-watch-test-watch-closed,UID:ba92fb67-3623-41bf-ac4d-94fdbe313f1a,ResourceVersion:14079965,Generation:0,CreationTimestamp:2020-06-01 13:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jun 1 13:06:24.192: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5356,SelfLink:/api/v1/namespaces/watch-5356/configmaps/e2e-watch-test-watch-closed,UID:ba92fb67-3623-41bf-ac4d-94fdbe313f1a,ResourceVersion:14079966,Generation:0,CreationTimestamp:2020-06-01 13:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 1 13:06:24.192: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5356,SelfLink:/api/v1/namespaces/watch-5356/configmaps/e2e-watch-test-watch-closed,UID:ba92fb67-3623-41bf-ac4d-94fdbe313f1a,ResourceVersion:14079967,Generation:0,CreationTimestamp:2020-06-01 13:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:06:24.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5356" for this suite.
Jun 1 13:06:30.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:06:30.271: INFO: namespace watch-5356 deletion completed in 6.073785779s
• [SLOW TEST:6.171 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:06:30.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jun 1 13:06:30.367: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jun 1 13:06:30.947: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jun 1 13:06:33.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 1 13:06:35.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726613590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 1 13:06:38.121: INFO: Waited 626.858278ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:06:38.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6972" for this suite.
Jun 1 13:06:44.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:06:45.028: INFO: namespace aggregator-6972 deletion completed in 6.329010617s
• [SLOW TEST:14.756 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:06:45.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4137/configmap-test-6a0e6942-c22e-4d37-9096-6def0ae656b8
STEP: Creating a pod to test consume configMaps
Jun 1 13:06:45.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f" in namespace "configmap-4137" to be "success or failure"
Jun 1 13:06:45.156: INFO: Pod "pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.553406ms
Jun 1 13:06:47.159: INFO: Pod "pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023469559s
Jun 1 13:06:49.164: INFO: Pod "pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028693621s
STEP: Saw pod success
Jun 1 13:06:49.164: INFO: Pod "pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f" satisfied condition "success or failure"
Jun 1 13:06:49.167: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f container env-test:
STEP: delete the pod
Jun 1 13:06:49.408: INFO: Waiting for pod pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f to disappear
Jun 1 13:06:49.480: INFO: Pod pod-configmaps-9a024654-5987-4f89-8f69-f8caa260716f no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:06:49.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4137" for this suite.
Jun 1 13:06:55.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:06:55.770: INFO: namespace configmap-4137 deletion completed in 6.28753031s
• [SLOW TEST:10.741 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:06:55.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Jun 1 13:07:01.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-306f4c37-7949-42aa-add9-384622023001 -c busybox-main-container --namespace=emptydir-4314 -- cat /usr/share/volumeshare/shareddata.txt'
Jun 1 13:07:02.148: INFO: stderr: "I0601 13:07:02.045401 894 log.go:172] (0xc000926420) (0xc0006a8a00) Create stream\nI0601 13:07:02.045468 894 log.go:172] (0xc000926420) (0xc0006a8a00) Stream added, broadcasting: 1\nI0601 13:07:02.048432 894 log.go:172] (0xc000926420) Reply frame received for 1\nI0601 13:07:02.048483 894 log.go:172] (0xc000926420) (0xc000680000) Create stream\nI0601 13:07:02.048502 894 log.go:172] (0xc000926420) (0xc000680000) Stream added, broadcasting: 3\nI0601 13:07:02.049614 894 log.go:172] (0xc000926420) Reply frame received for 3\nI0601 13:07:02.049659 894 log.go:172] (0xc000926420) (0xc0006a8aa0) Create stream\nI0601 13:07:02.049676 894 log.go:172] (0xc000926420) (0xc0006a8aa0) Stream added, broadcasting: 5\nI0601 13:07:02.050488 894 log.go:172] (0xc000926420) Reply frame received for 5\nI0601 13:07:02.142598 894 log.go:172] (0xc000926420) Data frame received for 3\nI0601 13:07:02.142668 894 log.go:172] (0xc000680000) (3) Data frame handling\nI0601 13:07:02.142695 894 log.go:172] (0xc000680000) (3) Data frame sent\nI0601 13:07:02.142714 894 log.go:172] (0xc000926420) Data frame received for 3\nI0601 13:07:02.142730 894 log.go:172] (0xc000680000) (3) Data frame handling\nI0601 13:07:02.142781 894 log.go:172] (0xc000926420) Data frame received for 5\nI0601 13:07:02.142819 894 log.go:172] (0xc0006a8aa0) (5) Data frame handling\nI0601 13:07:02.144503 894 log.go:172] (0xc000926420) Data frame received for 1\nI0601 13:07:02.144524 894 log.go:172] (0xc0006a8a00) (1) Data frame handling\nI0601 13:07:02.144537 894 log.go:172] (0xc0006a8a00) (1) Data frame sent\nI0601 13:07:02.144656 894 log.go:172] (0xc000926420) (0xc0006a8a00) Stream removed, broadcasting: 1\nI0601 13:07:02.144896 894 log.go:172] (0xc000926420) Go away received\nI0601 13:07:02.145061 894 log.go:172] (0xc000926420) (0xc0006a8a00) Stream removed, broadcasting: 1\nI0601 13:07:02.145085 894 log.go:172] (0xc000926420) (0xc000680000) Stream removed, broadcasting: 3\nI0601 13:07:02.145103 894 log.go:172] (0xc000926420) (0xc0006a8aa0) Stream removed, broadcasting: 5\n"
Jun 1 13:07:02.148: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:07:02.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4314" for this suite.
Jun 1 13:07:08.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:07:08.264: INFO: namespace emptydir-4314 deletion completed in 6.111650043s
• [SLOW TEST:12.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:07:08.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 1 13:07:12.866: INFO: Successfully updated pod "labelsupdatec8cff0e8-7209-42e0-b909-4816fa04c3fe"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:07:14.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9330" for this suite.
Jun 1 13:07:36.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:07:37.083: INFO: namespace projected-9330 deletion completed in 22.140914409s
• [SLOW TEST:28.818 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:07:37.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jun 1 13:07:37.140: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:07:51.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2166" for this suite.
Jun 1 13:07:57.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:07:57.997: INFO: namespace pods-2166 deletion completed in 6.100678535s
• [SLOW TEST:20.914 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:07:57.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 1 13:07:58.088: INFO: Waiting up to 5m0s for pod "pod-a37b470b-8a48-43e1-9cff-6692c09365d0" in namespace "emptydir-5552" to be "success or failure"
Jun 1 13:07:58.116: INFO: Pod "pod-a37b470b-8a48-43e1-9cff-6692c09365d0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.200886ms
Jun 1 13:08:00.120: INFO: Pod "pod-a37b470b-8a48-43e1-9cff-6692c09365d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032278812s
Jun 1 13:08:02.125: INFO: Pod "pod-a37b470b-8a48-43e1-9cff-6692c09365d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.037083541s
Jun 1 13:08:04.129: INFO: Pod "pod-a37b470b-8a48-43e1-9cff-6692c09365d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041531559s
STEP: Saw pod success
Jun 1 13:08:04.130: INFO: Pod "pod-a37b470b-8a48-43e1-9cff-6692c09365d0" satisfied condition "success or failure"
Jun 1 13:08:04.133: INFO: Trying to get logs from node iruya-worker pod pod-a37b470b-8a48-43e1-9cff-6692c09365d0 container test-container:
STEP: delete the pod
Jun 1 13:08:04.207: INFO: Waiting for pod pod-a37b470b-8a48-43e1-9cff-6692c09365d0 to disappear
Jun 1 13:08:04.211: INFO: Pod pod-a37b470b-8a48-43e1-9cff-6692c09365d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:08:04.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5552" for this suite.
Jun 1 13:08:10.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:08:10.333: INFO: namespace emptydir-5552 deletion completed in 6.119205937s
• [SLOW TEST:12.336 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:08:10.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 13:08:10.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:08:16.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3110" for this suite.
Jun 1 13:09:06.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:09:06.601: INFO: namespace pods-3110 deletion completed in 50.138748624s
• [SLOW TEST:56.267 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:09:06.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-eb3074b7-1a89-4444-9a5d-72889da32eab
STEP: Creating a pod to test consume secrets
Jun 1 13:09:06.676: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6" in namespace "projected-4998" to be "success or failure"
Jun 1 13:09:06.687: INFO: Pod "pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.125191ms
Jun 1 13:09:08.690: INFO: Pod "pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014684056s
Jun 1 13:09:10.695: INFO: Pod "pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019278185s
STEP: Saw pod success
Jun 1 13:09:10.695: INFO: Pod "pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6" satisfied condition "success or failure"
Jun 1 13:09:10.698: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6 container secret-volume-test:
STEP: delete the pod
Jun 1 13:09:10.736: INFO: Waiting for pod pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6 to disappear
Jun 1 13:09:10.751: INFO: Pod pod-projected-secrets-c99bceb0-d94b-461c-9a7f-a1c84e45b3b6 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:09:10.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4998" for this suite.
Jun 1 13:09:16.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:09:16.851: INFO: namespace projected-4998 deletion completed in 6.095636346s • [SLOW TEST:10.249 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:09:16.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jun 1 13:09:16.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6465 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 1 13:09:20.879: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0601 13:09:20.802111 916 log.go:172] (0xc000a4a160) (0xc0005de320) Create stream\nI0601 13:09:20.802194 916 log.go:172] (0xc000a4a160) (0xc0005de320) Stream added, broadcasting: 1\nI0601 13:09:20.804383 916 log.go:172] (0xc000a4a160) Reply frame received for 1\nI0601 13:09:20.804427 916 log.go:172] (0xc000a4a160) (0xc0003ac000) Create stream\nI0601 13:09:20.804436 916 log.go:172] (0xc000a4a160) (0xc0003ac000) Stream added, broadcasting: 3\nI0601 13:09:20.805465 916 log.go:172] (0xc000a4a160) Reply frame received for 3\nI0601 13:09:20.805497 916 log.go:172] (0xc000a4a160) (0xc0005de3c0) Create stream\nI0601 13:09:20.805503 916 log.go:172] (0xc000a4a160) (0xc0005de3c0) Stream added, broadcasting: 5\nI0601 13:09:20.806321 916 log.go:172] (0xc000a4a160) Reply frame received for 5\nI0601 13:09:20.806353 916 log.go:172] (0xc000a4a160) (0xc0005de460) Create stream\nI0601 13:09:20.806366 916 log.go:172] (0xc000a4a160) (0xc0005de460) Stream added, broadcasting: 7\nI0601 13:09:20.807083 916 log.go:172] (0xc000a4a160) Reply frame received for 7\nI0601 13:09:20.807250 916 log.go:172] (0xc0003ac000) (3) Writing data frame\nI0601 13:09:20.807382 916 log.go:172] (0xc0003ac000) (3) Writing data frame\nI0601 13:09:20.808584 916 log.go:172] (0xc000a4a160) Data frame received for 5\nI0601 13:09:20.808611 916 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0601 13:09:20.808629 916 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0601 13:09:20.809382 916 log.go:172] (0xc000a4a160) Data frame received for 5\nI0601 13:09:20.809406 916 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0601 13:09:20.809427 916 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0601 13:09:20.850407 916 log.go:172] (0xc000a4a160) Data frame received for 5\nI0601 13:09:20.850445 916 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0601 13:09:20.850657 916 
log.go:172] (0xc000a4a160) Data frame received for 7\nI0601 13:09:20.850698 916 log.go:172] (0xc0005de460) (7) Data frame handling\nI0601 13:09:20.851016 916 log.go:172] (0xc000a4a160) Data frame received for 1\nI0601 13:09:20.851050 916 log.go:172] (0xc0005de320) (1) Data frame handling\nI0601 13:09:20.851091 916 log.go:172] (0xc0005de320) (1) Data frame sent\nI0601 13:09:20.851126 916 log.go:172] (0xc000a4a160) (0xc0005de320) Stream removed, broadcasting: 1\nI0601 13:09:20.851174 916 log.go:172] (0xc000a4a160) (0xc0003ac000) Stream removed, broadcasting: 3\nI0601 13:09:20.851241 916 log.go:172] (0xc000a4a160) Go away received\nI0601 13:09:20.851294 916 log.go:172] (0xc000a4a160) (0xc0005de320) Stream removed, broadcasting: 1\nI0601 13:09:20.851336 916 log.go:172] (0xc000a4a160) (0xc0003ac000) Stream removed, broadcasting: 3\nI0601 13:09:20.851366 916 log.go:172] (0xc000a4a160) (0xc0005de3c0) Stream removed, broadcasting: 5\nI0601 13:09:20.851384 916 log.go:172] (0xc000a4a160) (0xc0005de460) Stream removed, broadcasting: 7\n" Jun 1 13:09:20.879: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:09:22.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6465" for this suite. 
Jun 1 13:09:32.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:09:33.026: INFO: namespace kubectl-6465 deletion completed in 10.134675812s • [SLOW TEST:16.174 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:09:33.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-3e831d08-9049-470d-a297-3511aaa47294 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:09:33.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4956" for this suite. 
Jun 1 13:09:39.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:09:39.204: INFO: namespace secrets-4956 deletion completed in 6.086069123s • [SLOW TEST:6.178 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:09:39.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-45e4434d-3d39-43c6-bcde-359afd63dde2 STEP: Creating secret with name s-test-opt-upd-64587d7f-00ee-4b6f-89db-df3f70143514 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-45e4434d-3d39-43c6-bcde-359afd63dde2 STEP: Updating secret s-test-opt-upd-64587d7f-00ee-4b6f-89db-df3f70143514 STEP: Creating secret with name s-test-opt-create-52574f7a-a668-4c64-85c7-c417d3c34fc0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:10:50.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "secrets-7624" for this suite. Jun 1 13:11:12.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:11:12.301: INFO: namespace secrets-7624 deletion completed in 22.120133705s • [SLOW TEST:93.096 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:11:12.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 1 13:11:16.370: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-8bc6a5f4-7873-476a-9fbc-966c5f4c690c,GenerateName:,Namespace:events-6997,SelfLink:/api/v1/namespaces/events-6997/pods/send-events-8bc6a5f4-7873-476a-9fbc-966c5f4c690c,UID:947d22e3-07d1-49c0-ab16-a52e3fe1bc29,ResourceVersion:14080888,Generation:0,CreationTimestamp:2020-06-01 13:11:12 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 348019196,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qffhh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qffhh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-qffhh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002627f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002627f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:11:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:11:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:11:12 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.90,StartTime:2020-06-01 13:11:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-01 13:11:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://8776e72b3edf5236447eb64de31cb408a677df9d0973396c544c9bf0b08515a7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 1 13:11:18.379: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 1 13:11:20.384: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:11:20.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6997" for this suite. 
Jun 1 13:12:08.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:12:08.500: INFO: namespace events-6997 deletion completed in 48.097603628s • [SLOW TEST:56.199 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:12:08.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 1 13:12:08.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1151' Jun 1 13:12:08.679: INFO: stderr: "" Jun 1 13:12:08.679: INFO: 
stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jun 1 13:12:08.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1151' Jun 1 13:12:22.189: INFO: stderr: "" Jun 1 13:12:22.189: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:12:22.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1151" for this suite. Jun 1 13:12:28.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:12:28.304: INFO: namespace kubectl-1151 deletion completed in 6.105173886s • [SLOW TEST:19.804 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:12:28.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:12:28.486: INFO: Create a RollingUpdate DaemonSet Jun 1 13:12:28.490: INFO: Check that daemon pods launch on every node of the cluster Jun 1 13:12:28.493: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:28.497: INFO: Number of nodes with available pods: 0 Jun 1 13:12:28.497: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:12:29.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:29.506: INFO: Number of nodes with available pods: 0 Jun 1 13:12:29.506: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:12:30.549: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:30.552: INFO: Number of nodes with available pods: 0 Jun 1 13:12:30.552: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:12:31.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:31.505: INFO: Number of nodes with available pods: 0 Jun 1 13:12:31.505: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:12:32.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node Jun 1 13:12:32.507: INFO: Number of nodes with available pods: 0 Jun 1 13:12:32.507: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:12:33.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:33.507: INFO: Number of nodes with available pods: 2 Jun 1 13:12:33.507: INFO: Number of running nodes: 2, number of available pods: 2 Jun 1 13:12:33.507: INFO: Update the DaemonSet to trigger a rollout Jun 1 13:12:33.515: INFO: Updating DaemonSet daemon-set Jun 1 13:12:38.534: INFO: Roll back the DaemonSet before rollout is complete Jun 1 13:12:38.542: INFO: Updating DaemonSet daemon-set Jun 1 13:12:38.542: INFO: Make sure DaemonSet rollback is complete Jun 1 13:12:38.572: INFO: Wrong image for pod: daemon-set-kp64h. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 1 13:12:38.572: INFO: Pod daemon-set-kp64h is not available Jun 1 13:12:38.576: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:39.580: INFO: Wrong image for pod: daemon-set-kp64h. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Jun 1 13:12:39.580: INFO: Pod daemon-set-kp64h is not available Jun 1 13:12:39.583: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:12:40.589: INFO: Pod daemon-set-dbf6l is not available Jun 1 13:12:40.594: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7470, will wait for the garbage collector to delete the pods Jun 1 13:12:40.688: INFO: Deleting DaemonSet.extensions daemon-set took: 18.092168ms Jun 1 13:12:40.989: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.211151ms Jun 1 13:12:52.192: INFO: Number of nodes with available pods: 0 Jun 1 13:12:52.192: INFO: Number of running nodes: 0, number of available pods: 0 Jun 1 13:12:52.195: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7470/daemonsets","resourceVersion":"14081189"},"items":null} Jun 1 13:12:52.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7470/pods","resourceVersion":"14081189"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:12:52.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7470" for this suite. 
Jun 1 13:12:58.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:12:58.348: INFO: namespace daemonsets-7470 deletion completed in 6.13703737s • [SLOW TEST:30.044 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:12:58.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:12:58.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389" in namespace "projected-510" to be "success or failure" Jun 1 13:12:58.419: INFO: Pod "downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.545702ms Jun 1 13:13:00.424: INFO: Pod "downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019054087s Jun 1 13:13:02.428: INFO: Pod "downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023613905s STEP: Saw pod success Jun 1 13:13:02.428: INFO: Pod "downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389" satisfied condition "success or failure" Jun 1 13:13:02.431: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389 container client-container: STEP: delete the pod Jun 1 13:13:02.580: INFO: Waiting for pod downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389 to disappear Jun 1 13:13:02.648: INFO: Pod downwardapi-volume-5c47b486-4adf-44b4-b09c-a0447bdc0389 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:13:02.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-510" for this suite. 
Jun 1 13:13:08.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:13:08.784: INFO: namespace projected-510 deletion completed in 6.132344093s • [SLOW TEST:10.435 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:13:08.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-594 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 1 13:13:08.956: INFO: Found 0 stateful pods, waiting for 3 Jun 1 13:13:18.969: INFO: Waiting for pod 
ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:13:18.969: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:13:18.969: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 1 13:13:28.960: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:13:28.960: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:13:28.960: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 1 13:13:28.988: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 1 13:13:39.072: INFO: Updating stateful set ss2 Jun 1 13:13:39.087: INFO: Waiting for Pod statefulset-594/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 1 13:13:49.096: INFO: Waiting for Pod statefulset-594/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 1 13:13:59.383: INFO: Found 2 stateful pods, waiting for 3 Jun 1 13:14:09.389: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:14:09.389: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:14:09.389: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 1 13:14:09.413: INFO: Updating stateful set ss2 Jun 1 13:14:09.467: INFO: Waiting for Pod statefulset-594/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 1 13:14:19.492: INFO: Updating stateful set ss2 Jun 1 
13:14:19.507: INFO: Waiting for StatefulSet statefulset-594/ss2 to complete update Jun 1 13:14:19.507: INFO: Waiting for Pod statefulset-594/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 1 13:14:29.516: INFO: Waiting for StatefulSet statefulset-594/ss2 to complete update Jun 1 13:14:29.516: INFO: Waiting for Pod statefulset-594/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 1 13:14:39.514: INFO: Deleting all statefulset in ns statefulset-594 Jun 1 13:14:39.516: INFO: Scaling statefulset ss2 to 0 Jun 1 13:14:59.556: INFO: Waiting for statefulset status.replicas updated to 0 Jun 1 13:14:59.560: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:14:59.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-594" for this suite. 
Jun 1 13:15:05.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:15:05.706: INFO: namespace statefulset-594 deletion completed in 6.102159426s
• [SLOW TEST:116.921 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:15:05.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 13:15:05.803: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.788194ms)
Jun 1 13:15:05.806: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.774278ms)
Jun 1 13:15:05.810: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.625947ms)
Jun 1 13:15:05.814: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.777903ms)
Jun 1 13:15:05.818: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.771916ms)
Jun 1 13:15:05.821: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.726159ms)
Jun 1 13:15:05.824: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.969887ms)
Jun 1 13:15:05.827: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.857ms)
Jun 1 13:15:05.830: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.052937ms)
Jun 1 13:15:05.833: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.835559ms)
Jun 1 13:15:05.862: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 28.652926ms)
Jun 1 13:15:05.873: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 11.254592ms)
Jun 1 13:15:05.876: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.736138ms)
Jun 1 13:15:05.879: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.862665ms)
Jun 1 13:15:05.882: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.73839ms)
Jun 1 13:15:05.885: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.095202ms)
Jun 1 13:15:05.888: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.944784ms)
Jun 1 13:15:05.891: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.875933ms)
Jun 1 13:15:05.894: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.952492ms)
Jun 1 13:15:05.897: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.248926ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:15:05.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1753" for this suite.
Jun 1 13:15:11.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:15:12.046: INFO: namespace proxy-1753 deletion completed in 6.14577758s
• [SLOW TEST:6.340 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:15:12.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jun 1 13:15:12.211: INFO: Waiting up to 5m0s for pod "var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793" in namespace "var-expansion-9567" to be "success or failure"
Jun 1 13:15:12.222: INFO: Pod "var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793": Phase="Pending", Reason="", readiness=false. Elapsed: 11.896595ms
Jun 1 13:15:14.962: INFO: Pod "var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75125254s
Jun 1 13:15:18.551: INFO: Pod "var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340251133s
Jun 1 13:15:20.556: INFO: Pod "var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.345078576s
STEP: Saw pod success
Jun 1 13:15:20.556: INFO: Pod "var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793" satisfied condition "success or failure"
Jun 1 13:15:20.559: INFO: Trying to get logs from node iruya-worker pod var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793 container dapi-container:
STEP: delete the pod
Jun 1 13:15:20.625: INFO: Waiting for pod var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793 to disappear
Jun 1 13:15:20.678: INFO: Pod var-expansion-7f1143a1-bcf9-42f8-8d8c-5c66b0e4f793 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:15:20.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9567" for this suite.
Jun 1 13:15:26.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:15:26.850: INFO: namespace var-expansion-9567 deletion completed in 6.168298231s
• [SLOW TEST:14.803 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:15:26.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-643c07c9-0d4c-4365-81de-e40b75f9f4b3
STEP: Creating a pod to test consume configMaps
Jun 1 13:15:26.936: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d" in namespace "projected-8770" to be "success or failure"
Jun 1 13:15:26.948: INFO: Pod "pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.653103ms
Jun 1 13:15:29.204: INFO: Pod "pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268329024s
Jun 1 13:15:31.209: INFO: Pod "pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273267168s
Jun 1 13:15:33.215: INFO: Pod "pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.279365072s
STEP: Saw pod success
Jun 1 13:15:33.215: INFO: Pod "pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d" satisfied condition "success or failure"
Jun 1 13:15:33.218: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d container projected-configmap-volume-test:
STEP: delete the pod
Jun 1 13:15:33.261: INFO: Waiting for pod pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d to disappear
Jun 1 13:15:33.265: INFO: Pod pod-projected-configmaps-f6b195a8-2611-434b-977b-df9639dc053d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:15:33.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8770" for this suite.
Jun 1 13:15:39.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:15:39.388: INFO: namespace projected-8770 deletion completed in 6.11997537s
• [SLOW TEST:12.538 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:15:39.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7379/configmap-test-0993843f-8b00-4499-8b38-7f5a584022d0
STEP: Creating a pod to test consume configMaps
Jun 1 13:15:39.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac" in namespace "configmap-7379" to be "success or failure"
Jun 1 13:15:39.519: INFO: Pod "pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031536ms
Jun 1 13:15:41.550: INFO: Pod "pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034838273s
Jun 1 13:15:43.575: INFO: Pod "pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059294897s
STEP: Saw pod success
Jun 1 13:15:43.575: INFO: Pod "pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac" satisfied condition "success or failure"
Jun 1 13:15:43.628: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac container env-test:
STEP: delete the pod
Jun 1 13:15:43.732: INFO: Waiting for pod pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac to disappear
Jun 1 13:15:43.742: INFO: Pod pod-configmaps-283930b1-4b1b-4d61-83eb-0b3f5df7f5ac no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:15:43.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7379" for this suite.
Jun 1 13:15:49.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:15:49.847: INFO: namespace configmap-7379 deletion completed in 6.102584331s
• [SLOW TEST:10.459 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:15:49.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f24005f9-ada3-4bd9-bcc6-521d7cda7d51
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f24005f9-ada3-4bd9-bcc6-521d7cda7d51
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:15:56.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9036" for this suite.
Jun 1 13:16:18.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:16:18.154: INFO: namespace configmap-9036 deletion completed in 22.102388213s
• [SLOW TEST:28.307 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:16:18.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-c06db56e-307d-42ed-85de-3e26bb9cfbfc
STEP: Creating a pod to test consume secrets
Jun 1 13:16:18.244: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc" in namespace "projected-1289" to be "success or failure"
Jun 1 13:16:18.248: INFO: Pod "pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.937899ms
Jun 1 13:16:20.253: INFO: Pod "pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008928441s
Jun 1 13:16:22.258: INFO: Pod "pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc": Phase="Running", Reason="", readiness=true. Elapsed: 4.013416225s
Jun 1 13:16:24.263: INFO: Pod "pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018573572s
STEP: Saw pod success
Jun 1 13:16:24.263: INFO: Pod "pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc" satisfied condition "success or failure"
Jun 1 13:16:24.266: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc container projected-secret-volume-test:
STEP: delete the pod
Jun 1 13:16:24.288: INFO: Waiting for pod pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc to disappear
Jun 1 13:16:24.292: INFO: Pod pod-projected-secrets-1ac05312-a608-4de2-a96d-b4b6944ea5bc no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:16:24.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1289" for this suite.
Jun 1 13:16:30.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:16:30.443: INFO: namespace projected-1289 deletion completed in 6.147790419s
• [SLOW TEST:12.288 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:16:30.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 1 13:16:30.492: INFO: Waiting up to 5m0s for pod "pod-8d466c92-debc-4176-b0e4-153c7f1f9ace" in namespace "emptydir-4240" to be "success or failure"
Jun 1 13:16:30.511: INFO: Pod "pod-8d466c92-debc-4176-b0e4-153c7f1f9ace": Phase="Pending", Reason="", readiness=false. Elapsed: 18.974981ms
Jun 1 13:16:32.515: INFO: Pod "pod-8d466c92-debc-4176-b0e4-153c7f1f9ace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022879471s
Jun 1 13:16:34.519: INFO: Pod "pod-8d466c92-debc-4176-b0e4-153c7f1f9ace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027315284s
STEP: Saw pod success
Jun 1 13:16:34.520: INFO: Pod "pod-8d466c92-debc-4176-b0e4-153c7f1f9ace" satisfied condition "success or failure"
Jun 1 13:16:34.523: INFO: Trying to get logs from node iruya-worker pod pod-8d466c92-debc-4176-b0e4-153c7f1f9ace container test-container:
STEP: delete the pod
Jun 1 13:16:34.546: INFO: Waiting for pod pod-8d466c92-debc-4176-b0e4-153c7f1f9ace to disappear
Jun 1 13:16:34.550: INFO: Pod pod-8d466c92-debc-4176-b0e4-153c7f1f9ace no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:16:34.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4240" for this suite.
Jun 1 13:16:40.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:16:40.651: INFO: namespace emptydir-4240 deletion completed in 6.098066704s
• [SLOW TEST:10.206 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:16:40.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:16:45.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1459" for this suite.
Jun 1 13:17:07.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:17:07.925: INFO: namespace replication-controller-1459 deletion completed in 22.117609859s
• [SLOW TEST:27.273 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:17:07.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e8292b3e-a9cc-486d-a331-4335c3dfbf6b
STEP: Creating a pod to test consume configMaps
Jun 1 13:17:08.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f" in namespace "configmap-8555" to be "success or failure"
Jun 1 13:17:08.145: INFO: Pod "pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f": Phase="Pending", Reason="", readiness=false. Elapsed: 72.659108ms
Jun 1 13:17:10.241: INFO: Pod "pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168721749s
Jun 1 13:17:12.244: INFO: Pod "pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171520634s
STEP: Saw pod success
Jun 1 13:17:12.244: INFO: Pod "pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f" satisfied condition "success or failure"
Jun 1 13:17:12.246: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f container configmap-volume-test:
STEP: delete the pod
Jun 1 13:17:12.259: INFO: Waiting for pod pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f to disappear
Jun 1 13:17:12.279: INFO: Pod pod-configmaps-266626d4-1a8f-4ba9-a602-9727368b998f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:17:12.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8555" for this suite.
Jun 1 13:17:18.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:17:18.381: INFO: namespace configmap-8555 deletion completed in 6.098547607s
• [SLOW TEST:10.455 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:17:18.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jun 1 13:17:18.456: INFO: namespace kubectl-9438
Jun 1 13:17:18.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9438'
Jun 1 13:17:23.635: INFO: stderr: ""
Jun 1 13:17:23.635: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 1 13:17:24.639: INFO: Selector matched 1 pods for map[app:redis]
Jun 1 13:17:24.639: INFO: Found 0 / 1
Jun 1 13:17:25.666: INFO: Selector matched 1 pods for map[app:redis]
Jun 1 13:17:25.666: INFO: Found 0 / 1
Jun 1 13:17:26.640: INFO: Selector matched 1 pods for map[app:redis]
Jun 1 13:17:26.640: INFO: Found 0 / 1
Jun 1 13:17:27.639: INFO: Selector matched 1 pods for map[app:redis]
Jun 1 13:17:27.639: INFO: Found 1 / 1
Jun 1 13:17:27.639: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 1 13:17:27.696: INFO: Selector matched 1 pods for map[app:redis]
Jun 1 13:17:27.696: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 1 13:17:27.696: INFO: wait on redis-master startup in kubectl-9438
Jun 1 13:17:27.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cwh4j redis-master --namespace=kubectl-9438'
Jun 1 13:17:27.800: INFO: stderr: ""
Jun 1 13:17:27.801: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jun 13:17:26.472 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jun 13:17:26.472 # Server started, Redis version 3.2.12\n1:M 01 Jun 13:17:26.472 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jun 13:17:26.472 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jun 1 13:17:27.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9438'
Jun 1 13:17:27.938: INFO: stderr: ""
Jun 1 13:17:27.938: INFO: stdout: "service/rm2 exposed\n"
Jun 1 13:17:27.946: INFO: Service rm2 in namespace kubectl-9438 found.
STEP: exposing service
Jun 1 13:17:29.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9438'
Jun 1 13:17:30.100: INFO: stderr: ""
Jun 1 13:17:30.100: INFO: stdout: "service/rm3 exposed\n"
Jun 1 13:17:30.176: INFO: Service rm3 in namespace kubectl-9438 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:17:32.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9438" for this suite.
Jun 1 13:17:54.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:17:54.360: INFO: namespace kubectl-9438 deletion completed in 22.145249367s
• [SLOW TEST:35.979 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:17:54.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:17:54.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5955" for this suite.
Jun 1 13:18:00.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:18:00.651: INFO: namespace kubelet-test-5955 deletion completed in 6.110336612s
• [SLOW TEST:6.290 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:18:00.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-c85678ba-5396-48cc-8b0c-73a8bf779407
STEP: Creating a pod to test consume secrets
Jun 1 13:18:00.788: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54" in namespace "projected-2831" to be "success or failure"
Jun 1 13:18:00.792: INFO: Pod "pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.404743ms
Jun 1 13:18:02.796: INFO: Pod "pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007055749s
Jun 1 13:18:04.799: INFO: Pod "pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010299631s
STEP: Saw pod success
Jun 1 13:18:04.799: INFO: Pod "pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54" satisfied condition "success or failure"
Jun 1 13:18:04.801: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54 container projected-secret-volume-test:
STEP: delete the pod
Jun 1 13:18:04.926: INFO: Waiting for pod pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54 to disappear
Jun 1 13:18:05.011: INFO: Pod pod-projected-secrets-69370195-b7aa-43c0-9c62-b355875a8c54 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:18:05.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2831" for this suite.
Jun 1 13:18:11.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:18:11.139: INFO: namespace projected-2831 deletion completed in 6.122676683s
• [SLOW TEST:10.488 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:18:11.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lswk
STEP: Creating a pod to test atomic-volume-subpath
Jun 1 13:18:11.275: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lswk" in namespace "subpath-393" to be "success or failure"
Jun 1 13:18:11.278: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423105ms
Jun 1 13:18:13.282: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007203442s
Jun 1 13:18:15.286: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 4.011049061s
Jun 1 13:18:17.291: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 6.015707507s
Jun 1 13:18:19.295: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 8.019621302s
Jun 1 13:18:21.298: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 10.023413573s
Jun 1 13:18:23.303: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 12.027635001s
Jun 1 13:18:25.307: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 14.032231335s
Jun 1 13:18:27.311: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 16.036118528s
Jun 1 13:18:29.315: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 18.040402286s
Jun 1 13:18:31.320: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 20.044772509s
Jun 1 13:18:33.323: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Running", Reason="", readiness=true. Elapsed: 22.048497323s
Jun 1 13:18:35.327: INFO: Pod "pod-subpath-test-configmap-lswk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.052265728s
STEP: Saw pod success
Jun 1 13:18:35.327: INFO: Pod "pod-subpath-test-configmap-lswk" satisfied condition "success or failure"
Jun 1 13:18:35.330: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-lswk container test-container-subpath-configmap-lswk:
STEP: delete the pod
Jun 1 13:18:35.351: INFO: Waiting for pod pod-subpath-test-configmap-lswk to disappear
Jun 1 13:18:35.418: INFO: Pod pod-subpath-test-configmap-lswk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lswk
Jun 1 13:18:35.418: INFO: Deleting pod "pod-subpath-test-configmap-lswk" in namespace "subpath-393"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:18:35.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-393" for this suite.
Jun 1 13:18:41.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:18:41.549: INFO: namespace subpath-393 deletion completed in 6.124125296s
• [SLOW TEST:30.409 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:18:41.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2f0da006-cade-4641-9f04-a3241997f101
STEP: Creating a pod to test consume secrets
Jun 1 13:18:41.698: INFO: Waiting up to 5m0s for pod "pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614" in namespace "secrets-9863" to be "success or failure"
Jun 1 13:18:41.715: INFO: Pod "pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614": Phase="Pending", Reason="", readiness=false. Elapsed: 16.456321ms
Jun 1 13:18:43.719: INFO: Pod "pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020945574s
Jun 1 13:18:45.723: INFO: Pod "pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024393622s
STEP: Saw pod success
Jun 1 13:18:45.723: INFO: Pod "pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614" satisfied condition "success or failure"
Jun 1 13:18:45.725: INFO: Trying to get logs from node iruya-worker pod pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614 container secret-volume-test:
STEP: delete the pod
Jun 1 13:18:45.746: INFO: Waiting for pod pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614 to disappear
Jun 1 13:18:45.762: INFO: Pod pod-secrets-78e9b190-3dc2-4b0f-9a30-01e8ee969614 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:18:45.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9863" for this suite.
Jun 1 13:18:51.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:18:52.041: INFO: namespace secrets-9863 deletion completed in 6.275149275s
• [SLOW TEST:10.491 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:18:52.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 1 13:18:52.167: INFO: Waiting up to 5m0s for pod "pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed" in namespace "emptydir-8277" to be "success or failure"
Jun 1 13:18:52.204: INFO: Pod "pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 37.256038ms
Jun 1 13:18:54.219: INFO: Pod "pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052157909s
Jun 1 13:18:56.272: INFO: Pod "pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104839212s
Jun 1 13:18:58.276: INFO: Pod "pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108628934s
STEP: Saw pod success
Jun 1 13:18:58.276: INFO: Pod "pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed" satisfied condition "success or failure"
Jun 1 13:18:58.278: INFO: Trying to get logs from node iruya-worker2 pod pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed container test-container:
STEP: delete the pod
Jun 1 13:18:58.402: INFO: Waiting for pod pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed to disappear
Jun 1 13:18:58.435: INFO: Pod pod-14f59a7e-cb88-49cb-bc65-77769eb5a1ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:18:58.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8277" for this suite.
Jun 1 13:19:04.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:19:04.547: INFO: namespace emptydir-8277 deletion completed in 6.107557483s
• [SLOW TEST:12.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:19:04.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 1 13:19:04.592: INFO: PodSpec: initContainers in spec.initContainers
Jun 1 13:19:56.750: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-371566f9-0b99-4770-a226-2f7aa95a0c16", GenerateName:"", Namespace:"init-container-7585",
SelfLink:"/api/v1/namespaces/init-container-7585/pods/pod-init-371566f9-0b99-4770-a226-2f7aa95a0c16", UID:"52f21e90-b102-48a3-a976-71441497e535", ResourceVersion:"14082726", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726614344, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"592976454"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mm6kp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002fd06c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mm6kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mm6kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mm6kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002598b68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b006c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002598bf0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002598c10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002598c18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002598c1c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726614344, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726614344, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726614344, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726614344, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.21", StartTime:(*v1.Time)(0xc0025bf520), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0025bf600), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00224b8f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dc0a0a31da9a55dbee98d434673348aa47a09290505a9a6a3d7904e549c228f8"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025bf640), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025bf5c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:19:56.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7585" for this suite.
Jun 1 13:20:18.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:20:18.963: INFO: namespace init-container-7585 deletion completed in 22.119318916s
• [SLOW TEST:74.415 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:20:18.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bf4af69f-75b8-438a-a0f8-265ab3ecdb11
STEP: Creating a pod to test consume configMaps
Jun 1 13:20:19.021: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09" in namespace "configmap-6475" to be "success or failure"
Jun 1 13:20:19.034: INFO: Pod "pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09": Phase="Pending", Reason="", readiness=false. Elapsed: 12.989511ms
Jun 1 13:20:21.038: INFO: Pod "pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016716486s
Jun 1 13:20:23.043: INFO: Pod "pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09": Phase="Running", Reason="", readiness=true. Elapsed: 4.021489555s
Jun 1 13:20:25.048: INFO: Pod "pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026595816s
STEP: Saw pod success
Jun 1 13:20:25.048: INFO: Pod "pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09" satisfied condition "success or failure"
Jun 1 13:20:25.051: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09 container configmap-volume-test:
STEP: delete the pod
Jun 1 13:20:25.091: INFO: Waiting for pod pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09 to disappear
Jun 1 13:20:25.123: INFO: Pod pod-configmaps-2e7304ef-97a5-4ed9-86be-faaf8e192b09 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:20:25.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6475" for this suite.
Jun 1 13:20:31.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:20:31.229: INFO: namespace configmap-6475 deletion completed in 6.10274581s
• [SLOW TEST:12.265 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:20:31.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ba385198-8b19-4757-bd86-0bb125d1404e
STEP: Creating a pod to test consume secrets
Jun 1 13:20:31.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2" in namespace "projected-2038" to be "success or failure"
Jun 1 13:20:31.424: INFO: Pod "pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.492774ms
Jun 1 13:20:33.427: INFO: Pod "pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007262458s
Jun 1 13:20:35.441: INFO: Pod "pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021296988s
STEP: Saw pod success
Jun 1 13:20:35.442: INFO: Pod "pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2" satisfied condition "success or failure"
Jun 1 13:20:35.444: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2 container projected-secret-volume-test:
STEP: delete the pod
Jun 1 13:20:35.467: INFO: Waiting for pod pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2 to disappear
Jun 1 13:20:35.490: INFO: Pod pod-projected-secrets-b39a7e02-bf94-499c-a63d-04d882d3aed2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:20:35.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2038" for this suite.
Jun 1 13:20:41.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:20:41.591: INFO: namespace projected-2038 deletion completed in 6.097255241s
• [SLOW TEST:10.361 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:20:41.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jun 1 13:20:41.673: INFO: Waiting up to 5m0s for pod "client-containers-b2534827-c427-437e-899a-508bcf90bb39" in namespace "containers-8219" to be "success or failure"
Jun 1 13:20:41.676: INFO: Pod "client-containers-b2534827-c427-437e-899a-508bcf90bb39": Phase="Pending", Reason="", readiness=false. Elapsed: 3.626757ms
Jun 1 13:20:43.753: INFO: Pod "client-containers-b2534827-c427-437e-899a-508bcf90bb39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080289303s
Jun 1 13:20:45.758: INFO: Pod "client-containers-b2534827-c427-437e-899a-508bcf90bb39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084953379s
STEP: Saw pod success
Jun 1 13:20:45.758: INFO: Pod "client-containers-b2534827-c427-437e-899a-508bcf90bb39" satisfied condition "success or failure"
Jun 1 13:20:45.761: INFO: Trying to get logs from node iruya-worker2 pod client-containers-b2534827-c427-437e-899a-508bcf90bb39 container test-container:
STEP: delete the pod
Jun 1 13:20:45.809: INFO: Waiting for pod client-containers-b2534827-c427-437e-899a-508bcf90bb39 to disappear
Jun 1 13:20:45.820: INFO: Pod client-containers-b2534827-c427-437e-899a-508bcf90bb39 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:20:45.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8219" for this suite.
Jun 1 13:20:51.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:20:51.923: INFO: namespace containers-8219 deletion completed in 6.098998316s
• [SLOW TEST:10.332 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP:
Creating a kubernetes client Jun 1 13:20:51.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 1 13:20:52.032: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:52.036: INFO: Number of nodes with available pods: 0 Jun 1 13:20:52.036: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:20:53.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:53.045: INFO: Number of nodes with available pods: 0 Jun 1 13:20:53.045: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:20:54.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:54.070: INFO: Number of nodes with available pods: 0 Jun 1 13:20:54.070: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:20:55.041: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:55.044: INFO: Number of nodes with available pods: 0 Jun 1 13:20:55.044: INFO: Node iruya-worker is running more than one daemon pod Jun 1 13:20:56.041: 
INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:56.045: INFO: Number of nodes with available pods: 1 Jun 1 13:20:56.045: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:20:57.041: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:57.045: INFO: Number of nodes with available pods: 2 Jun 1 13:20:57.045: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 1 13:20:57.066: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:57.069: INFO: Number of nodes with available pods: 1 Jun 1 13:20:57.069: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:20:58.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:58.078: INFO: Number of nodes with available pods: 1 Jun 1 13:20:58.078: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:20:59.074: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:20:59.078: INFO: Number of nodes with available pods: 1 Jun 1 13:20:59.078: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:21:00.074: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:21:00.078: INFO: Number of nodes with available pods: 1 
Jun 1 13:21:00.078: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:21:01.078: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:21:01.081: INFO: Number of nodes with available pods: 1 Jun 1 13:21:01.081: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:21:02.074: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:21:02.077: INFO: Number of nodes with available pods: 1 Jun 1 13:21:02.077: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:21:03.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:21:03.078: INFO: Number of nodes with available pods: 1 Jun 1 13:21:03.078: INFO: Node iruya-worker2 is running more than one daemon pod Jun 1 13:21:04.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 1 13:21:04.079: INFO: Number of nodes with available pods: 2 Jun 1 13:21:04.079: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5509, will wait for the garbage collector to delete the pods Jun 1 13:21:04.142: INFO: Deleting DaemonSet.extensions daemon-set took: 6.606929ms Jun 1 13:21:04.442: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.339523ms Jun 1 13:21:11.946: INFO: Number of nodes with available pods: 0 Jun 1 
13:21:11.946: INFO: Number of running nodes: 0, number of available pods: 0 Jun 1 13:21:11.948: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5509/daemonsets","resourceVersion":"14083017"},"items":null} Jun 1 13:21:11.951: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5509/pods","resourceVersion":"14083017"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:21:11.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5509" for this suite. Jun 1 13:21:17.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:21:18.058: INFO: namespace daemonsets-5509 deletion completed in 6.092598045s • [SLOW TEST:26.134 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:21:18.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:21:50.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6279" for this suite. 
Jun 1 13:21:56.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:21:57.020: INFO: namespace container-runtime-6279 deletion completed in 6.102795613s • [SLOW TEST:38.962 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:21:57.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-3266 I0601 13:21:57.113048 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3266, replica count: 1 I0601 13:21:58.163620 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0601 13:21:59.163839 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0601 13:22:00.164106 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0601 13:22:01.164382 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 1 13:22:01.327: INFO: Created: latency-svc-z5rdz Jun 1 13:22:01.350: INFO: Got endpoints: latency-svc-z5rdz [85.521261ms] Jun 1 13:22:01.491: INFO: Created: latency-svc-jlpgg Jun 1 13:22:01.505: INFO: Got endpoints: latency-svc-jlpgg [155.441499ms] Jun 1 13:22:01.556: INFO: Created: latency-svc-lg777 Jun 1 13:22:01.572: INFO: Got endpoints: latency-svc-lg777 [221.738525ms] Jun 1 13:22:01.597: INFO: Created: latency-svc-p7dms Jun 1 13:22:01.614: INFO: Got endpoints: latency-svc-p7dms [264.212397ms] Jun 1 13:22:01.645: INFO: Created: latency-svc-lqzp2 Jun 1 13:22:01.694: INFO: Got endpoints: latency-svc-lqzp2 [344.022757ms] Jun 1 13:22:01.704: INFO: Created: latency-svc-wwphn Jun 1 13:22:01.716: INFO: Got endpoints: latency-svc-wwphn [366.065911ms] Jun 1 13:22:01.748: INFO: Created: latency-svc-w6b6p Jun 1 13:22:01.764: INFO: Got endpoints: latency-svc-w6b6p [414.282076ms] Jun 1 13:22:01.790: INFO: Created: latency-svc-pzl7w Jun 1 13:22:01.837: INFO: Got endpoints: latency-svc-pzl7w [487.53087ms] Jun 1 13:22:01.844: INFO: Created: latency-svc-jrpcv Jun 1 13:22:01.862: INFO: Got endpoints: latency-svc-jrpcv [511.87118ms] Jun 1 13:22:02.024: INFO: Created: latency-svc-hjxqv Jun 1 13:22:02.030: INFO: Got endpoints: latency-svc-hjxqv [679.792815ms] Jun 1 13:22:02.091: INFO: Created: latency-svc-bv4tx Jun 1 13:22:02.120: INFO: Got endpoints: latency-svc-bv4tx [769.79198ms] Jun 1 13:22:02.196: INFO: Created: latency-svc-lplqs Jun 1 13:22:02.222: INFO: Got endpoints: latency-svc-lplqs [872.455319ms] Jun 1 13:22:02.244: INFO: Created: latency-svc-mtc8s Jun 1 
13:22:02.258: INFO: Got endpoints: latency-svc-mtc8s [907.944768ms] Jun 1 13:22:02.323: INFO: Created: latency-svc-fwpmr Jun 1 13:22:02.326: INFO: Got endpoints: latency-svc-fwpmr [975.712024ms] Jun 1 13:22:02.366: INFO: Created: latency-svc-ll5mm Jun 1 13:22:02.379: INFO: Got endpoints: latency-svc-ll5mm [1.02917954s] Jun 1 13:22:02.479: INFO: Created: latency-svc-xfkph Jun 1 13:22:02.482: INFO: Got endpoints: latency-svc-xfkph [1.13183254s] Jun 1 13:22:02.515: INFO: Created: latency-svc-9fz69 Jun 1 13:22:02.542: INFO: Got endpoints: latency-svc-9fz69 [1.036737562s] Jun 1 13:22:02.574: INFO: Created: latency-svc-fw8rd Jun 1 13:22:02.634: INFO: Got endpoints: latency-svc-fw8rd [1.062480443s] Jun 1 13:22:02.666: INFO: Created: latency-svc-t8hfr Jun 1 13:22:02.681: INFO: Got endpoints: latency-svc-t8hfr [1.066943654s] Jun 1 13:22:02.708: INFO: Created: latency-svc-nwwrv Jun 1 13:22:02.717: INFO: Got endpoints: latency-svc-nwwrv [1.023108514s] Jun 1 13:22:02.772: INFO: Created: latency-svc-zf6m7 Jun 1 13:22:02.785: INFO: Got endpoints: latency-svc-zf6m7 [1.069328007s] Jun 1 13:22:02.820: INFO: Created: latency-svc-w67gf Jun 1 13:22:02.838: INFO: Got endpoints: latency-svc-w67gf [1.073597179s] Jun 1 13:22:02.928: INFO: Created: latency-svc-66nb2 Jun 1 13:22:02.934: INFO: Got endpoints: latency-svc-66nb2 [1.096782282s] Jun 1 13:22:02.960: INFO: Created: latency-svc-hfk4g Jun 1 13:22:02.977: INFO: Got endpoints: latency-svc-hfk4g [1.115086544s] Jun 1 13:22:03.862: INFO: Created: latency-svc-z76bj Jun 1 13:22:04.964: INFO: Got endpoints: latency-svc-z76bj [2.934421005s] Jun 1 13:22:05.163: INFO: Created: latency-svc-jvlzz Jun 1 13:22:05.170: INFO: Got endpoints: latency-svc-jvlzz [3.050407077s] Jun 1 13:22:05.241: INFO: Created: latency-svc-r94hg Jun 1 13:22:05.328: INFO: Got endpoints: latency-svc-r94hg [3.105928613s] Jun 1 13:22:05.338: INFO: Created: latency-svc-w24zj Jun 1 13:22:05.368: INFO: Got endpoints: latency-svc-w24zj [3.110219429s] Jun 1 13:22:05.427: INFO: 
Created: latency-svc-6z6lw Jun 1 13:22:05.483: INFO: Got endpoints: latency-svc-6z6lw [3.157316571s] Jun 1 13:22:05.530: INFO: Created: latency-svc-6w5ww Jun 1 13:22:05.543: INFO: Got endpoints: latency-svc-6w5ww [3.16367652s] Jun 1 13:22:05.611: INFO: Created: latency-svc-2b5c7 Jun 1 13:22:05.615: INFO: Got endpoints: latency-svc-2b5c7 [3.13330548s] Jun 1 13:22:05.651: INFO: Created: latency-svc-dzbsw Jun 1 13:22:05.670: INFO: Got endpoints: latency-svc-dzbsw [3.127634365s] Jun 1 13:22:05.699: INFO: Created: latency-svc-92sqm Jun 1 13:22:05.748: INFO: Got endpoints: latency-svc-92sqm [3.113700051s] Jun 1 13:22:05.765: INFO: Created: latency-svc-wl8fn Jun 1 13:22:05.778: INFO: Got endpoints: latency-svc-wl8fn [3.097381151s] Jun 1 13:22:05.811: INFO: Created: latency-svc-ntqq5 Jun 1 13:22:05.952: INFO: Got endpoints: latency-svc-ntqq5 [3.234775886s] Jun 1 13:22:05.955: INFO: Created: latency-svc-qgdpk Jun 1 13:22:06.001: INFO: Got endpoints: latency-svc-qgdpk [3.215920052s] Jun 1 13:22:06.035: INFO: Created: latency-svc-lpklh Jun 1 13:22:06.089: INFO: Got endpoints: latency-svc-lpklh [3.25126662s] Jun 1 13:22:06.123: INFO: Created: latency-svc-rv84k Jun 1 13:22:06.134: INFO: Got endpoints: latency-svc-rv84k [3.199704407s] Jun 1 13:22:06.172: INFO: Created: latency-svc-rjfm8 Jun 1 13:22:06.189: INFO: Got endpoints: latency-svc-rjfm8 [3.211926153s] Jun 1 13:22:06.240: INFO: Created: latency-svc-cr5jc Jun 1 13:22:06.242: INFO: Got endpoints: latency-svc-cr5jc [1.278010427s] Jun 1 13:22:06.278: INFO: Created: latency-svc-89znm Jun 1 13:22:06.297: INFO: Got endpoints: latency-svc-89znm [1.127007753s] Jun 1 13:22:06.326: INFO: Created: latency-svc-twknx Jun 1 13:22:06.376: INFO: Got endpoints: latency-svc-twknx [1.047794842s] Jun 1 13:22:06.394: INFO: Created: latency-svc-9xwsb Jun 1 13:22:06.412: INFO: Got endpoints: latency-svc-9xwsb [1.043390316s] Jun 1 13:22:06.436: INFO: Created: latency-svc-8zzqn Jun 1 13:22:06.454: INFO: Got endpoints: latency-svc-8zzqn 
[971.131014ms] Jun 1 13:22:06.524: INFO: Created: latency-svc-6mg5z Jun 1 13:22:06.526: INFO: Got endpoints: latency-svc-6mg5z [983.638795ms] Jun 1 13:22:06.560: INFO: Created: latency-svc-d5vmp Jun 1 13:22:06.575: INFO: Got endpoints: latency-svc-d5vmp [959.918357ms] Jun 1 13:22:06.597: INFO: Created: latency-svc-b5sj5 Jun 1 13:22:06.612: INFO: Got endpoints: latency-svc-b5sj5 [942.634416ms] Jun 1 13:22:06.665: INFO: Created: latency-svc-rb7qz Jun 1 13:22:06.667: INFO: Got endpoints: latency-svc-rb7qz [919.261125ms] Jun 1 13:22:06.706: INFO: Created: latency-svc-b4zft Jun 1 13:22:06.721: INFO: Got endpoints: latency-svc-b4zft [942.524074ms] Jun 1 13:22:06.748: INFO: Created: latency-svc-8b4zh Jun 1 13:22:06.802: INFO: Got endpoints: latency-svc-8b4zh [849.536048ms] Jun 1 13:22:06.812: INFO: Created: latency-svc-x7vtn Jun 1 13:22:06.830: INFO: Got endpoints: latency-svc-x7vtn [828.675724ms] Jun 1 13:22:06.860: INFO: Created: latency-svc-wzvgn Jun 1 13:22:06.878: INFO: Got endpoints: latency-svc-wzvgn [789.120702ms] Jun 1 13:22:06.977: INFO: Created: latency-svc-fl69c Jun 1 13:22:06.980: INFO: Got endpoints: latency-svc-fl69c [845.36843ms] Jun 1 13:22:07.152: INFO: Created: latency-svc-zwbpb Jun 1 13:22:07.180: INFO: Got endpoints: latency-svc-zwbpb [990.777712ms] Jun 1 13:22:07.246: INFO: Created: latency-svc-vmlw7 Jun 1 13:22:07.335: INFO: Got endpoints: latency-svc-vmlw7 [1.092284868s] Jun 1 13:22:07.388: INFO: Created: latency-svc-5t4kc Jun 1 13:22:07.456: INFO: Got endpoints: latency-svc-5t4kc [1.158228165s] Jun 1 13:22:07.492: INFO: Created: latency-svc-jgb6j Jun 1 13:22:07.504: INFO: Got endpoints: latency-svc-jgb6j [1.127495789s] Jun 1 13:22:07.586: INFO: Created: latency-svc-nklwh Jun 1 13:22:07.589: INFO: Got endpoints: latency-svc-nklwh [1.177229273s] Jun 1 13:22:07.634: INFO: Created: latency-svc-fk2l6 Jun 1 13:22:07.665: INFO: Got endpoints: latency-svc-fk2l6 [1.210243833s] Jun 1 13:22:07.730: INFO: Created: latency-svc-kv5xg Jun 1 13:22:07.739: INFO: 
Got endpoints: latency-svc-kv5xg [1.212883606s] Jun 1 13:22:07.775: INFO: Created: latency-svc-rg9vm Jun 1 13:22:07.787: INFO: Got endpoints: latency-svc-rg9vm [1.21216191s] Jun 1 13:22:07.816: INFO: Created: latency-svc-7svf6 Jun 1 13:22:07.886: INFO: Got endpoints: latency-svc-7svf6 [1.273321445s] Jun 1 13:22:07.888: INFO: Created: latency-svc-l6llk Jun 1 13:22:07.896: INFO: Got endpoints: latency-svc-l6llk [1.228725529s] Jun 1 13:22:07.922: INFO: Created: latency-svc-qjc29 Jun 1 13:22:07.939: INFO: Got endpoints: latency-svc-qjc29 [1.217728838s] Jun 1 13:22:07.964: INFO: Created: latency-svc-qpdqn Jun 1 13:22:07.981: INFO: Got endpoints: latency-svc-qpdqn [1.179585337s] Jun 1 13:22:08.035: INFO: Created: latency-svc-6796n Jun 1 13:22:08.048: INFO: Got endpoints: latency-svc-6796n [1.217579303s] Jun 1 13:22:08.073: INFO: Created: latency-svc-r8g9b Jun 1 13:22:08.090: INFO: Got endpoints: latency-svc-r8g9b [1.211848761s] Jun 1 13:22:08.122: INFO: Created: latency-svc-t5ss2 Jun 1 13:22:08.279: INFO: Got endpoints: latency-svc-t5ss2 [1.299093177s] Jun 1 13:22:08.308: INFO: Created: latency-svc-7j2zv Jun 1 13:22:08.320: INFO: Got endpoints: latency-svc-7j2zv [1.139890824s] Jun 1 13:22:08.342: INFO: Created: latency-svc-bw8mb Jun 1 13:22:08.360: INFO: Got endpoints: latency-svc-bw8mb [1.025275873s] Jun 1 13:22:08.407: INFO: Created: latency-svc-l998d Jun 1 13:22:08.419: INFO: Got endpoints: latency-svc-l998d [963.539134ms] Jun 1 13:22:08.451: INFO: Created: latency-svc-t7dmn Jun 1 13:22:08.467: INFO: Got endpoints: latency-svc-t7dmn [963.277405ms] Jun 1 13:22:08.487: INFO: Created: latency-svc-mk7jv Jun 1 13:22:08.501: INFO: Got endpoints: latency-svc-mk7jv [912.275295ms] Jun 1 13:22:08.551: INFO: Created: latency-svc-vvnv6 Jun 1 13:22:08.553: INFO: Got endpoints: latency-svc-vvnv6 [888.434621ms] Jun 1 13:22:08.575: INFO: Created: latency-svc-8dmpv Jun 1 13:22:08.592: INFO: Got endpoints: latency-svc-8dmpv [852.226626ms] Jun 1 13:22:08.618: INFO: Created: 
latency-svc-c6tht Jun 1 13:22:08.628: INFO: Got endpoints: latency-svc-c6tht [840.342066ms] Jun 1 13:22:08.648: INFO: Created: latency-svc-w6t6k Jun 1 13:22:08.706: INFO: Got endpoints: latency-svc-w6t6k [819.876388ms] Jun 1 13:22:08.708: INFO: Created: latency-svc-ktpzf Jun 1 13:22:08.718: INFO: Got endpoints: latency-svc-ktpzf [822.221399ms] Jun 1 13:22:08.740: INFO: Created: latency-svc-kc7kr Jun 1 13:22:08.755: INFO: Got endpoints: latency-svc-kc7kr [815.889163ms] Jun 1 13:22:08.775: INFO: Created: latency-svc-bxgmh Jun 1 13:22:08.791: INFO: Got endpoints: latency-svc-bxgmh [809.632589ms] Jun 1 13:22:08.843: INFO: Created: latency-svc-9xcfx Jun 1 13:22:08.846: INFO: Got endpoints: latency-svc-9xcfx [797.978084ms] Jun 1 13:22:08.894: INFO: Created: latency-svc-469fj Jun 1 13:22:08.919: INFO: Got endpoints: latency-svc-469fj [828.48778ms] Jun 1 13:22:08.937: INFO: Created: latency-svc-pbgrr Jun 1 13:22:09.042: INFO: Got endpoints: latency-svc-pbgrr [762.936508ms] Jun 1 13:22:09.074: INFO: Created: latency-svc-g5wvp Jun 1 13:22:09.092: INFO: Got endpoints: latency-svc-g5wvp [771.681636ms] Jun 1 13:22:09.123: INFO: Created: latency-svc-s7642 Jun 1 13:22:09.197: INFO: Got endpoints: latency-svc-s7642 [836.742135ms] Jun 1 13:22:09.200: INFO: Created: latency-svc-bf7zp Jun 1 13:22:09.218: INFO: Got endpoints: latency-svc-bf7zp [798.999687ms] Jun 1 13:22:09.243: INFO: Created: latency-svc-7s96x Jun 1 13:22:09.266: INFO: Got endpoints: latency-svc-7s96x [799.13247ms] Jun 1 13:22:09.348: INFO: Created: latency-svc-2lbcp Jun 1 13:22:09.394: INFO: Created: latency-svc-lr9d5 Jun 1 13:22:09.394: INFO: Got endpoints: latency-svc-2lbcp [893.126129ms] Jun 1 13:22:09.423: INFO: Got endpoints: latency-svc-lr9d5 [869.833965ms] Jun 1 13:22:10.291: INFO: Created: latency-svc-bs2d5 Jun 1 13:22:11.011: INFO: Got endpoints: latency-svc-bs2d5 [2.419548032s] Jun 1 13:22:11.047: INFO: Created: latency-svc-2bndc Jun 1 13:22:11.083: INFO: Got endpoints: latency-svc-2bndc [2.455109379s] Jun 
1 13:22:11.164: INFO: Created: latency-svc-tb7sx Jun 1 13:22:11.179: INFO: Got endpoints: latency-svc-tb7sx [2.472916596s] Jun 1 13:22:11.202: INFO: Created: latency-svc-9qz9n Jun 1 13:22:11.233: INFO: Got endpoints: latency-svc-9qz9n [2.51463807s] Jun 1 13:22:11.306: INFO: Created: latency-svc-mdjkv Jun 1 13:22:11.350: INFO: Got endpoints: latency-svc-mdjkv [2.595444904s] Jun 1 13:22:11.382: INFO: Created: latency-svc-z4qwd Jun 1 13:22:11.432: INFO: Got endpoints: latency-svc-z4qwd [2.641071174s] Jun 1 13:22:11.471: INFO: Created: latency-svc-s285m Jun 1 13:22:11.486: INFO: Got endpoints: latency-svc-s285m [2.639740514s] Jun 1 13:22:11.506: INFO: Created: latency-svc-qwh8g Jun 1 13:22:11.522: INFO: Got endpoints: latency-svc-qwh8g [2.602947633s] Jun 1 13:22:11.580: INFO: Created: latency-svc-pw5r6 Jun 1 13:22:11.622: INFO: Got endpoints: latency-svc-pw5r6 [2.580378632s] Jun 1 13:22:11.652: INFO: Created: latency-svc-gp86v Jun 1 13:22:11.666: INFO: Got endpoints: latency-svc-gp86v [2.574824922s] Jun 1 13:22:11.706: INFO: Created: latency-svc-45wd7 Jun 1 13:22:11.708: INFO: Got endpoints: latency-svc-45wd7 [2.511357235s] Jun 1 13:22:11.735: INFO: Created: latency-svc-7hmnv Jun 1 13:22:11.751: INFO: Got endpoints: latency-svc-7hmnv [2.532270392s] Jun 1 13:22:11.776: INFO: Created: latency-svc-rxr62 Jun 1 13:22:11.781: INFO: Got endpoints: latency-svc-rxr62 [2.514304182s] Jun 1 13:22:11.800: INFO: Created: latency-svc-5vswn Jun 1 13:22:11.855: INFO: Got endpoints: latency-svc-5vswn [2.460919561s] Jun 1 13:22:11.862: INFO: Created: latency-svc-c27ns Jun 1 13:22:11.878: INFO: Got endpoints: latency-svc-c27ns [2.454574713s] Jun 1 13:22:11.898: INFO: Created: latency-svc-qg5c8 Jun 1 13:22:11.914: INFO: Got endpoints: latency-svc-qg5c8 [902.718233ms] Jun 1 13:22:11.935: INFO: Created: latency-svc-pf4vr Jun 1 13:22:12.011: INFO: Got endpoints: latency-svc-pf4vr [928.225351ms] Jun 1 13:22:12.028: INFO: Created: latency-svc-5t4ds Jun 1 13:22:12.041: INFO: Got endpoints: 
latency-svc-5t4ds [861.646858ms] Jun 1 13:22:12.074: INFO: Created: latency-svc-k9r9k Jun 1 13:22:12.084: INFO: Got endpoints: latency-svc-k9r9k [851.371465ms] Jun 1 13:22:12.102: INFO: Created: latency-svc-xzpj9 Jun 1 13:22:12.155: INFO: Got endpoints: latency-svc-xzpj9 [804.321592ms] Jun 1 13:22:12.156: INFO: Created: latency-svc-sqjsd Jun 1 13:22:12.162: INFO: Got endpoints: latency-svc-sqjsd [729.423651ms] Jun 1 13:22:12.182: INFO: Created: latency-svc-wnnns Jun 1 13:22:12.198: INFO: Got endpoints: latency-svc-wnnns [712.45659ms] Jun 1 13:22:12.239: INFO: Created: latency-svc-4jd58 Jun 1 13:22:12.317: INFO: Got endpoints: latency-svc-4jd58 [794.64256ms] Jun 1 13:22:12.319: INFO: Created: latency-svc-bqsdw Jun 1 13:22:12.325: INFO: Got endpoints: latency-svc-bqsdw [702.428589ms] Jun 1 13:22:12.390: INFO: Created: latency-svc-z4jtb Jun 1 13:22:12.408: INFO: Got endpoints: latency-svc-z4jtb [741.518877ms] Jun 1 13:22:12.449: INFO: Created: latency-svc-w4262 Jun 1 13:22:12.484: INFO: Got endpoints: latency-svc-w4262 [775.680266ms] Jun 1 13:22:12.527: INFO: Created: latency-svc-zg559 Jun 1 13:22:12.535: INFO: Got endpoints: latency-svc-zg559 [784.085595ms] Jun 1 13:22:12.581: INFO: Created: latency-svc-lfqww Jun 1 13:22:12.583: INFO: Got endpoints: latency-svc-lfqww [801.949957ms] Jun 1 13:22:12.617: INFO: Created: latency-svc-rx2fx Jun 1 13:22:12.625: INFO: Got endpoints: latency-svc-rx2fx [769.516245ms] Jun 1 13:22:12.648: INFO: Created: latency-svc-c6q5m Jun 1 13:22:12.661: INFO: Got endpoints: latency-svc-c6q5m [783.680325ms] Jun 1 13:22:12.712: INFO: Created: latency-svc-6z5bj Jun 1 13:22:12.716: INFO: Got endpoints: latency-svc-6z5bj [801.672192ms] Jun 1 13:22:12.743: INFO: Created: latency-svc-wct9d Jun 1 13:22:12.752: INFO: Got endpoints: latency-svc-wct9d [740.777948ms] Jun 1 13:22:12.772: INFO: Created: latency-svc-hfsf2 Jun 1 13:22:12.783: INFO: Got endpoints: latency-svc-hfsf2 [742.08152ms] Jun 1 13:22:12.886: INFO: Created: latency-svc-xcqnf Jun 1 
13:22:12.890: INFO: Got endpoints: latency-svc-xcqnf [805.111138ms] Jun 1 13:22:12.918: INFO: Created: latency-svc-2w86z Jun 1 13:22:12.926: INFO: Got endpoints: latency-svc-2w86z [771.315669ms] Jun 1 13:22:12.946: INFO: Created: latency-svc-mqbnf Jun 1 13:22:12.962: INFO: Got endpoints: latency-svc-mqbnf [800.819259ms] Jun 1 13:22:13.072: INFO: Created: latency-svc-z748b Jun 1 13:22:13.075: INFO: Got endpoints: latency-svc-z748b [876.807367ms] Jun 1 13:22:13.144: INFO: Created: latency-svc-cgplh Jun 1 13:22:13.159: INFO: Got endpoints: latency-svc-cgplh [841.804085ms] Jun 1 13:22:13.227: INFO: Created: latency-svc-kl95p Jun 1 13:22:13.229: INFO: Got endpoints: latency-svc-kl95p [904.582989ms] Jun 1 13:22:13.303: INFO: Created: latency-svc-bl6sx Jun 1 13:22:13.347: INFO: Got endpoints: latency-svc-bl6sx [938.748835ms] Jun 1 13:22:13.378: INFO: Created: latency-svc-27mbm Jun 1 13:22:13.396: INFO: Got endpoints: latency-svc-27mbm [911.585111ms] Jun 1 13:22:13.423: INFO: Created: latency-svc-lc9nl Jun 1 13:22:13.478: INFO: Got endpoints: latency-svc-lc9nl [943.500374ms] Jun 1 13:22:13.501: INFO: Created: latency-svc-6wlpk Jun 1 13:22:13.516: INFO: Got endpoints: latency-svc-6wlpk [933.362391ms] Jun 1 13:22:13.571: INFO: Created: latency-svc-nt5vn Jun 1 13:22:13.610: INFO: Got endpoints: latency-svc-nt5vn [985.311838ms] Jun 1 13:22:13.631: INFO: Created: latency-svc-qhmd4 Jun 1 13:22:13.654: INFO: Got endpoints: latency-svc-qhmd4 [992.719821ms] Jun 1 13:22:13.675: INFO: Created: latency-svc-svrrl Jun 1 13:22:13.698: INFO: Got endpoints: latency-svc-svrrl [982.634363ms] Jun 1 13:22:13.757: INFO: Created: latency-svc-nn4c2 Jun 1 13:22:13.764: INFO: Got endpoints: latency-svc-nn4c2 [1.012252833s] Jun 1 13:22:13.793: INFO: Created: latency-svc-7cpxv Jun 1 13:22:13.805: INFO: Got endpoints: latency-svc-7cpxv [1.022323454s] Jun 1 13:22:13.829: INFO: Created: latency-svc-nzhsw Jun 1 13:22:13.841: INFO: Got endpoints: latency-svc-nzhsw [951.857414ms] Jun 1 13:22:13.891: INFO: 
Created: latency-svc-dp5rl Jun 1 13:22:13.895: INFO: Got endpoints: latency-svc-dp5rl [969.528579ms] Jun 1 13:22:13.962: INFO: Created: latency-svc-9lb9w Jun 1 13:22:13.974: INFO: Got endpoints: latency-svc-9lb9w [1.011848506s] Jun 1 13:22:14.036: INFO: Created: latency-svc-lzh94 Jun 1 13:22:14.038: INFO: Got endpoints: latency-svc-lzh94 [963.525655ms] Jun 1 13:22:14.068: INFO: Created: latency-svc-hzmrf Jun 1 13:22:14.083: INFO: Got endpoints: latency-svc-hzmrf [923.911464ms] Jun 1 13:22:14.104: INFO: Created: latency-svc-25tb2 Jun 1 13:22:14.134: INFO: Got endpoints: latency-svc-25tb2 [905.00182ms] Jun 1 13:22:14.179: INFO: Created: latency-svc-99bdw Jun 1 13:22:14.185: INFO: Got endpoints: latency-svc-99bdw [837.690448ms] Jun 1 13:22:14.208: INFO: Created: latency-svc-6hq7l Jun 1 13:22:14.221: INFO: Got endpoints: latency-svc-6hq7l [825.516736ms] Jun 1 13:22:14.249: INFO: Created: latency-svc-sb5sh Jun 1 13:22:14.270: INFO: Got endpoints: latency-svc-sb5sh [791.471991ms] Jun 1 13:22:14.323: INFO: Created: latency-svc-67zht Jun 1 13:22:14.330: INFO: Got endpoints: latency-svc-67zht [813.361878ms] Jun 1 13:22:14.350: INFO: Created: latency-svc-rdkhp Jun 1 13:22:14.366: INFO: Got endpoints: latency-svc-rdkhp [755.810724ms] Jun 1 13:22:14.388: INFO: Created: latency-svc-945c6 Jun 1 13:22:14.402: INFO: Got endpoints: latency-svc-945c6 [748.028857ms] Jun 1 13:22:14.460: INFO: Created: latency-svc-pjpwf Jun 1 13:22:14.484: INFO: Got endpoints: latency-svc-pjpwf [785.813057ms] Jun 1 13:22:14.485: INFO: Created: latency-svc-5bt8f Jun 1 13:22:14.506: INFO: Got endpoints: latency-svc-5bt8f [741.89065ms] Jun 1 13:22:14.536: INFO: Created: latency-svc-wptgb Jun 1 13:22:14.553: INFO: Got endpoints: latency-svc-wptgb [748.02296ms] Jun 1 13:22:14.600: INFO: Created: latency-svc-cn6s7 Jun 1 13:22:14.602: INFO: Got endpoints: latency-svc-cn6s7 [760.059667ms] Jun 1 13:22:14.628: INFO: Created: latency-svc-rbnwl Jun 1 13:22:14.643: INFO: Got endpoints: latency-svc-rbnwl 
[747.71625ms] Jun 1 13:22:14.664: INFO: Created: latency-svc-8cjrd Jun 1 13:22:14.674: INFO: Got endpoints: latency-svc-8cjrd [699.421295ms] Jun 1 13:22:14.696: INFO: Created: latency-svc-qm6gw Jun 1 13:22:14.730: INFO: Got endpoints: latency-svc-qm6gw [691.172184ms] Jun 1 13:22:14.746: INFO: Created: latency-svc-4x8jw Jun 1 13:22:14.758: INFO: Got endpoints: latency-svc-4x8jw [675.748816ms] Jun 1 13:22:14.782: INFO: Created: latency-svc-j8dpv Jun 1 13:22:14.795: INFO: Got endpoints: latency-svc-j8dpv [660.16571ms] Jun 1 13:22:14.818: INFO: Created: latency-svc-d2dhh Jun 1 13:22:14.880: INFO: Got endpoints: latency-svc-d2dhh [694.905671ms] Jun 1 13:22:14.898: INFO: Created: latency-svc-5hdxp Jun 1 13:22:14.909: INFO: Got endpoints: latency-svc-5hdxp [688.108456ms] Jun 1 13:22:14.928: INFO: Created: latency-svc-8sngk Jun 1 13:22:14.940: INFO: Got endpoints: latency-svc-8sngk [669.693996ms] Jun 1 13:22:14.962: INFO: Created: latency-svc-mzwvk Jun 1 13:22:14.978: INFO: Got endpoints: latency-svc-mzwvk [648.683877ms] Jun 1 13:22:15.062: INFO: Created: latency-svc-bwbv2 Jun 1 13:22:15.078: INFO: Got endpoints: latency-svc-bwbv2 [711.827099ms] Jun 1 13:22:15.130: INFO: Created: latency-svc-n6xpq Jun 1 13:22:15.222: INFO: Got endpoints: latency-svc-n6xpq [819.346804ms] Jun 1 13:22:15.262: INFO: Created: latency-svc-ffmwk Jun 1 13:22:15.276: INFO: Got endpoints: latency-svc-ffmwk [791.865441ms] Jun 1 13:22:15.413: INFO: Created: latency-svc-x5hlv Jun 1 13:22:15.426: INFO: Got endpoints: latency-svc-x5hlv [919.801201ms] Jun 1 13:22:15.455: INFO: Created: latency-svc-ggvzd Jun 1 13:22:15.487: INFO: Got endpoints: latency-svc-ggvzd [933.591624ms] Jun 1 13:22:15.551: INFO: Created: latency-svc-6dpbl Jun 1 13:22:15.562: INFO: Got endpoints: latency-svc-6dpbl [960.772444ms] Jun 1 13:22:15.598: INFO: Created: latency-svc-kjpb5 Jun 1 13:22:15.619: INFO: Got endpoints: latency-svc-kjpb5 [975.962713ms] Jun 1 13:22:15.688: INFO: Created: latency-svc-g4k6p Jun 1 13:22:15.720: INFO: 
Got endpoints: latency-svc-g4k6p [1.04629146s] Jun 1 13:22:15.722: INFO: Created: latency-svc-j8mlj Jun 1 13:22:15.762: INFO: Got endpoints: latency-svc-j8mlj [1.03231097s] Jun 1 13:22:15.832: INFO: Created: latency-svc-9xkxj Jun 1 13:22:15.836: INFO: Got endpoints: latency-svc-9xkxj [1.077516881s] Jun 1 13:22:15.904: INFO: Created: latency-svc-9h9km Jun 1 13:22:15.920: INFO: Got endpoints: latency-svc-9h9km [1.125594703s] Jun 1 13:22:15.964: INFO: Created: latency-svc-8zq7z Jun 1 13:22:15.966: INFO: Got endpoints: latency-svc-8zq7z [1.086615485s] Jun 1 13:22:16.021: INFO: Created: latency-svc-nj74h Jun 1 13:22:16.042: INFO: Got endpoints: latency-svc-nj74h [1.132620093s] Jun 1 13:22:16.102: INFO: Created: latency-svc-vl8z2 Jun 1 13:22:16.104: INFO: Got endpoints: latency-svc-vl8z2 [1.164508738s] Jun 1 13:22:16.132: INFO: Created: latency-svc-qlvx6 Jun 1 13:22:16.149: INFO: Got endpoints: latency-svc-qlvx6 [1.170732332s] Jun 1 13:22:16.170: INFO: Created: latency-svc-4rtgx Jun 1 13:22:16.185: INFO: Got endpoints: latency-svc-4rtgx [1.107001155s] Jun 1 13:22:16.245: INFO: Created: latency-svc-mpxbk Jun 1 13:22:16.247: INFO: Got endpoints: latency-svc-mpxbk [1.025671439s] Jun 1 13:22:16.294: INFO: Created: latency-svc-vh7qp Jun 1 13:22:16.312: INFO: Got endpoints: latency-svc-vh7qp [1.035843301s] Jun 1 13:22:16.342: INFO: Created: latency-svc-fnwgp Jun 1 13:22:16.376: INFO: Got endpoints: latency-svc-fnwgp [950.1352ms] Jun 1 13:22:16.398: INFO: Created: latency-svc-kv7vl Jun 1 13:22:16.414: INFO: Got endpoints: latency-svc-kv7vl [927.454806ms] Jun 1 13:22:16.434: INFO: Created: latency-svc-9cq9l Jun 1 13:22:16.444: INFO: Got endpoints: latency-svc-9cq9l [881.515502ms] Jun 1 13:22:16.470: INFO: Created: latency-svc-9hnqj Jun 1 13:22:16.522: INFO: Got endpoints: latency-svc-9hnqj [902.691644ms] Jun 1 13:22:16.552: INFO: Created: latency-svc-fmghj Jun 1 13:22:16.565: INFO: Got endpoints: latency-svc-fmghj [844.583853ms] Jun 1 13:22:16.588: INFO: Created: 
latency-svc-9j4xt Jun 1 13:22:16.601: INFO: Got endpoints: latency-svc-9j4xt [838.810311ms] Jun 1 13:22:16.647: INFO: Created: latency-svc-dntm6 Jun 1 13:22:16.649: INFO: Got endpoints: latency-svc-dntm6 [813.465414ms] Jun 1 13:22:16.679: INFO: Created: latency-svc-l92wl Jun 1 13:22:16.692: INFO: Got endpoints: latency-svc-l92wl [771.35817ms] Jun 1 13:22:16.716: INFO: Created: latency-svc-66rfg Jun 1 13:22:16.728: INFO: Got endpoints: latency-svc-66rfg [761.744272ms] Jun 1 13:22:16.784: INFO: Created: latency-svc-kx2zq Jun 1 13:22:16.787: INFO: Got endpoints: latency-svc-kx2zq [745.345439ms] Jun 1 13:22:16.816: INFO: Created: latency-svc-skwfz Jun 1 13:22:16.831: INFO: Got endpoints: latency-svc-skwfz [726.49095ms] Jun 1 13:22:16.852: INFO: Created: latency-svc-jnsjr Jun 1 13:22:16.867: INFO: Got endpoints: latency-svc-jnsjr [717.887117ms] Jun 1 13:22:16.928: INFO: Created: latency-svc-x5q44 Jun 1 13:22:16.931: INFO: Got endpoints: latency-svc-x5q44 [745.623675ms] Jun 1 13:22:16.962: INFO: Created: latency-svc-mtcxr Jun 1 13:22:16.976: INFO: Got endpoints: latency-svc-mtcxr [728.383622ms] Jun 1 13:22:17.014: INFO: Created: latency-svc-mnl6q Jun 1 13:22:17.053: INFO: Got endpoints: latency-svc-mnl6q [740.766952ms] Jun 1 13:22:17.076: INFO: Created: latency-svc-q6qjs Jun 1 13:22:17.090: INFO: Got endpoints: latency-svc-q6qjs [713.340582ms] Jun 1 13:22:17.130: INFO: Created: latency-svc-sp5jr Jun 1 13:22:17.138: INFO: Got endpoints: latency-svc-sp5jr [723.551361ms] Jun 1 13:22:17.797: INFO: Created: latency-svc-chmmn Jun 1 13:22:18.360: INFO: Got endpoints: latency-svc-chmmn [1.915588716s] Jun 1 13:22:18.410: INFO: Created: latency-svc-sfvn8 Jun 1 13:22:18.432: INFO: Got endpoints: latency-svc-sfvn8 [1.910248615s] Jun 1 13:22:18.538: INFO: Created: latency-svc-mxs7c Jun 1 13:22:18.567: INFO: Got endpoints: latency-svc-mxs7c [2.002593453s] Jun 1 13:22:18.608: INFO: Created: latency-svc-dvr7m Jun 1 13:22:18.625: INFO: Got endpoints: latency-svc-dvr7m [2.023959382s] Jun 
1 13:22:18.625: INFO: Latencies: [155.441499ms 221.738525ms 264.212397ms 344.022757ms 366.065911ms 414.282076ms 487.53087ms 511.87118ms 648.683877ms 660.16571ms 669.693996ms 675.748816ms 679.792815ms 688.108456ms 691.172184ms 694.905671ms 699.421295ms 702.428589ms 711.827099ms 712.45659ms 713.340582ms 717.887117ms 723.551361ms 726.49095ms 728.383622ms 729.423651ms 740.766952ms 740.777948ms 741.518877ms 741.89065ms 742.08152ms 745.345439ms 745.623675ms 747.71625ms 748.02296ms 748.028857ms 755.810724ms 760.059667ms 761.744272ms 762.936508ms 769.516245ms 769.79198ms 771.315669ms 771.35817ms 771.681636ms 775.680266ms 783.680325ms 784.085595ms 785.813057ms 789.120702ms 791.471991ms 791.865441ms 794.64256ms 797.978084ms 798.999687ms 799.13247ms 800.819259ms 801.672192ms 801.949957ms 804.321592ms 805.111138ms 809.632589ms 813.361878ms 813.465414ms 815.889163ms 819.346804ms 819.876388ms 822.221399ms 825.516736ms 828.48778ms 828.675724ms 836.742135ms 837.690448ms 838.810311ms 840.342066ms 841.804085ms 844.583853ms 845.36843ms 849.536048ms 851.371465ms 852.226626ms 861.646858ms 869.833965ms 872.455319ms 876.807367ms 881.515502ms 888.434621ms 893.126129ms 902.691644ms 902.718233ms 904.582989ms 905.00182ms 907.944768ms 911.585111ms 912.275295ms 919.261125ms 919.801201ms 923.911464ms 927.454806ms 928.225351ms 933.362391ms 933.591624ms 938.748835ms 942.524074ms 942.634416ms 943.500374ms 950.1352ms 951.857414ms 959.918357ms 960.772444ms 963.277405ms 963.525655ms 963.539134ms 969.528579ms 971.131014ms 975.712024ms 975.962713ms 982.634363ms 983.638795ms 985.311838ms 990.777712ms 992.719821ms 1.011848506s 1.012252833s 1.022323454s 1.023108514s 1.025275873s 1.025671439s 1.02917954s 1.03231097s 1.035843301s 1.036737562s 1.043390316s 1.04629146s 1.047794842s 1.062480443s 1.066943654s 1.069328007s 1.073597179s 1.077516881s 1.086615485s 1.092284868s 1.096782282s 1.107001155s 1.115086544s 1.125594703s 1.127007753s 1.127495789s 1.13183254s 1.132620093s 1.139890824s 1.158228165s 
1.164508738s 1.170732332s 1.177229273s 1.179585337s 1.210243833s 1.211848761s 1.21216191s 1.212883606s 1.217579303s 1.217728838s 1.228725529s 1.273321445s 1.278010427s 1.299093177s 1.910248615s 1.915588716s 2.002593453s 2.023959382s 2.419548032s 2.454574713s 2.455109379s 2.460919561s 2.472916596s 2.511357235s 2.514304182s 2.51463807s 2.532270392s 2.574824922s 2.580378632s 2.595444904s 2.602947633s 2.639740514s 2.641071174s 2.934421005s 3.050407077s 3.097381151s 3.105928613s 3.110219429s 3.113700051s 3.127634365s 3.13330548s 3.157316571s 3.16367652s 3.199704407s 3.211926153s 3.215920052s 3.234775886s 3.25126662s] Jun 1 13:22:18.625: INFO: 50 %ile: 933.362391ms Jun 1 13:22:18.625: INFO: 90 %ile: 2.580378632s Jun 1 13:22:18.625: INFO: 99 %ile: 3.234775886s Jun 1 13:22:18.625: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:22:18.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3266" for this suite. 
Jun 1 13:22:42.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:22:42.778: INFO: namespace svc-latency-3266 deletion completed in 24.147487697s • [SLOW TEST:45.758 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:22:42.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-29e7c826-7cee-4907-a905-df7f1138a821 STEP: Creating a pod to test consume secrets Jun 1 13:22:42.866: INFO: Waiting up to 5m0s for pod "pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9" in namespace "secrets-7213" to be "success or failure" Jun 1 13:22:42.879: INFO: Pod "pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.985753ms Jun 1 13:22:44.916: INFO: Pod "pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050254954s Jun 1 13:22:46.928: INFO: Pod "pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062467769s STEP: Saw pod success Jun 1 13:22:46.928: INFO: Pod "pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9" satisfied condition "success or failure" Jun 1 13:22:46.931: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9 container secret-volume-test: STEP: delete the pod Jun 1 13:22:46.966: INFO: Waiting for pod pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9 to disappear Jun 1 13:22:46.983: INFO: Pod pod-secrets-37644434-e748-4f09-83f6-1e808fdc46c9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:22:46.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7213" for this suite. Jun 1 13:22:52.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:22:53.119: INFO: namespace secrets-7213 deletion completed in 6.132859839s • [SLOW TEST:10.340 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:22:53.119: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-39cf70b3-74e4-40da-8e45-3a7508637331 STEP: Creating a pod to test consume configMaps Jun 1 13:22:53.199: INFO: Waiting up to 5m0s for pod "pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147" in namespace "configmap-5553" to be "success or failure" Jun 1 13:22:53.221: INFO: Pod "pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147": Phase="Pending", Reason="", readiness=false. Elapsed: 22.065034ms Jun 1 13:22:55.225: INFO: Pod "pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025779698s Jun 1 13:22:57.228: INFO: Pod "pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02922788s STEP: Saw pod success Jun 1 13:22:57.228: INFO: Pod "pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147" satisfied condition "success or failure" Jun 1 13:22:57.231: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147 container configmap-volume-test: STEP: delete the pod Jun 1 13:22:57.252: INFO: Waiting for pod pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147 to disappear Jun 1 13:22:57.256: INFO: Pod pod-configmaps-db1d8bda-0861-4f9b-88e4-04df7f41d147 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:22:57.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5553" for this suite. 
Jun 1 13:23:03.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:23:03.360: INFO: namespace configmap-5553 deletion completed in 6.101127043s • [SLOW TEST:10.241 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:23:03.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Jun 1 13:23:07.428: INFO: Pod pod-hostip-23ed31ed-fd8a-4b7e-9444-064d5427893f has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:23:07.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3800" for this suite. 
Jun 1 13:23:29.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:23:29.527: INFO: namespace pods-3800 deletion completed in 22.095604593s • [SLOW TEST:26.166 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:23:29.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-870af03b-2d24-4ad9-8029-95915f86d187 STEP: Creating a pod to test consume secrets Jun 1 13:23:29.693: INFO: Waiting up to 5m0s for pod "pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0" in namespace "secrets-6951" to be "success or failure" Jun 1 13:23:29.718: INFO: Pod "pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.48696ms Jun 1 13:23:33.031: INFO: Pod "pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.337756384s Jun 1 13:23:35.035: INFO: Pod "pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.342015428s STEP: Saw pod success Jun 1 13:23:35.035: INFO: Pod "pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0" satisfied condition "success or failure" Jun 1 13:23:35.038: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0 container secret-volume-test: STEP: delete the pod Jun 1 13:23:35.057: INFO: Waiting for pod pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0 to disappear Jun 1 13:23:35.113: INFO: Pod pod-secrets-21e7b21e-97ff-4676-bde2-a367c1fb8dd0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:23:35.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6951" for this suite. Jun 1 13:23:41.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:23:41.281: INFO: namespace secrets-6951 deletion completed in 6.163615008s STEP: Destroying namespace "secret-namespace-2253" for this suite. 
Jun 1 13:23:47.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:23:47.365: INFO: namespace secret-namespace-2253 deletion completed in 6.083880527s • [SLOW TEST:17.837 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:23:47.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 1 13:23:48.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4600' Jun 1 13:23:48.891: INFO: stderr: "" Jun 1 13:23:48.891: INFO: stdout: "replicationcontroller/update-demo-nautilus 
created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 1 13:23:48.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4600' Jun 1 13:23:49.014: INFO: stderr: "" Jun 1 13:23:49.014: INFO: stdout: "update-demo-nautilus-8h77d update-demo-nautilus-8lkps " Jun 1 13:23:49.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8h77d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4600' Jun 1 13:23:49.099: INFO: stderr: "" Jun 1 13:23:49.099: INFO: stdout: "" Jun 1 13:23:49.099: INFO: update-demo-nautilus-8h77d is created but not running Jun 1 13:23:54.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4600' Jun 1 13:23:54.212: INFO: stderr: "" Jun 1 13:23:54.212: INFO: stdout: "update-demo-nautilus-8h77d update-demo-nautilus-8lkps " Jun 1 13:23:54.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8h77d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4600' Jun 1 13:23:54.298: INFO: stderr: "" Jun 1 13:23:54.298: INFO: stdout: "true" Jun 1 13:23:54.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8h77d -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4600' Jun 1 13:23:54.400: INFO: stderr: "" Jun 1 13:23:54.400: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 13:23:54.400: INFO: validating pod update-demo-nautilus-8h77d Jun 1 13:23:54.409: INFO: got data: { "image": "nautilus.jpg" } Jun 1 13:23:54.410: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 1 13:23:54.410: INFO: update-demo-nautilus-8h77d is verified up and running Jun 1 13:23:54.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8lkps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4600' Jun 1 13:23:54.504: INFO: stderr: "" Jun 1 13:23:54.504: INFO: stdout: "true" Jun 1 13:23:54.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8lkps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4600' Jun 1 13:23:54.598: INFO: stderr: "" Jun 1 13:23:54.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 13:23:54.598: INFO: validating pod update-demo-nautilus-8lkps Jun 1 13:23:54.606: INFO: got data: { "image": "nautilus.jpg" } Jun 1 13:23:54.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 1 13:23:54.606: INFO: update-demo-nautilus-8lkps is verified up and running STEP: using delete to clean up resources Jun 1 13:23:54.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4600' Jun 1 13:23:54.698: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 13:23:54.698: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 1 13:23:54.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4600' Jun 1 13:23:54.797: INFO: stderr: "No resources found.\n" Jun 1 13:23:54.797: INFO: stdout: "" Jun 1 13:23:54.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4600 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 1 13:23:54.882: INFO: stderr: "" Jun 1 13:23:54.882: INFO: stdout: "update-demo-nautilus-8h77d\nupdate-demo-nautilus-8lkps\n" Jun 1 13:23:55.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4600' Jun 1 13:23:55.749: INFO: stderr: "No resources found.\n" Jun 1 13:23:55.749: INFO: stdout: "" Jun 1 13:23:55.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4600 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 1 13:23:55.846: INFO: stderr: "" Jun 1 13:23:55.846: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:23:55.846: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4600" for this suite. Jun 1 13:24:01.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:24:02.059: INFO: namespace kubectl-4600 deletion completed in 6.147317003s • [SLOW TEST:14.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:24:02.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:24:02.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4329' Jun 1 13:24:02.386: INFO: stderr: "" Jun 1 13:24:02.386: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 1 
13:24:02.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4329' Jun 1 13:24:02.658: INFO: stderr: "" Jun 1 13:24:02.658: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 1 13:24:03.663: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:24:03.663: INFO: Found 0 / 1 Jun 1 13:24:04.662: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:24:04.662: INFO: Found 0 / 1 Jun 1 13:24:05.678: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:24:05.678: INFO: Found 1 / 1 Jun 1 13:24:05.678: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 1 13:24:05.681: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:24:05.681: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 1 13:24:05.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-hgntx --namespace=kubectl-4329' Jun 1 13:24:05.783: INFO: stderr: "" Jun 1 13:24:05.783: INFO: stdout: "Name: redis-master-hgntx\nNamespace: kubectl-4329\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Mon, 01 Jun 2020 13:24:02 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.112\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://05d129a9bb1dd18f296432eb5436ad18d4b4433d3f8eedbb43b8d2d0a2dfed4a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 01 Jun 2020 13:24:04 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-q64qq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
default-token-q64qq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-q64qq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4329/redis-master-hgntx to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 0s kubelet, iruya-worker2 Started container redis-master\n" Jun 1 13:24:05.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4329' Jun 1 13:24:05.940: INFO: stderr: "" Jun 1 13:24:05.940: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4329\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-hgntx\n" Jun 1 13:24:05.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4329' Jun 1 13:24:06.039: INFO: stderr: "" Jun 1 13:24:06.039: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4329\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.232.119\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 
10.244.1.112:6379\nSession Affinity: None\nEvents: \n" Jun 1 13:24:06.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jun 1 13:24:06.178: INFO: stderr: "" Jun 1 13:24:06.178: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 01 Jun 2020 13:23:55 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 01 Jun 2020 13:23:55 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 01 Jun 2020 13:23:55 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 01 Jun 2020 13:23:55 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n 
Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 77d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 77d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 77d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 77d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 77d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 77d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 77d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 1 13:24:06.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4329' Jun 1 13:24:06.271: INFO: stderr: "" Jun 1 13:24:06.271: INFO: stdout: "Name: kubectl-4329\nLabels: e2e-framework=kubectl\n e2e-run=0a15dcd2-06fc-455d-b881-32cae529a883\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:24:06.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4329" for this suite. 
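The `kubectl describe` conformance test above passes because the describe output for the pod, rc, and service contains the fields the test expects. A minimal sketch of that kind of presence check, in Python over a trimmed sample of the pod output (the sample text and the field list here are illustrative, not the test's actual matcher):

```python
# Hypothetical sketch: confirm describe-style output mentions the
# headline fields a reader (or a test) would look for.
sample = """\
Name:       redis-master-hgntx
Namespace:  kubectl-4329
Labels:     app=redis
Status:     Running
"""

required = ["Name:", "Namespace:", "Labels:", "Status:"]
missing = [f for f in required if f not in sample]
print(missing)  # an empty list means every required field is present
```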
Jun 1 13:24:28.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:24:28.371: INFO: namespace kubectl-4329 deletion completed in 22.097372557s • [SLOW TEST:26.312 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:24:28.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 1 13:24:28.456: INFO: Waiting up to 5m0s for pod "pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6" in namespace "emptydir-6320" to be "success or failure" Jun 1 13:24:28.471: INFO: Pod "pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.769695ms Jun 1 13:24:30.475: INFO: Pod "pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019016735s Jun 1 13:24:32.480: INFO: Pod "pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02326647s STEP: Saw pod success Jun 1 13:24:32.480: INFO: Pod "pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6" satisfied condition "success or failure" Jun 1 13:24:32.482: INFO: Trying to get logs from node iruya-worker2 pod pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6 container test-container: STEP: delete the pod Jun 1 13:24:32.502: INFO: Waiting for pod pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6 to disappear Jun 1 13:24:32.507: INFO: Pod pod-3db964f7-bfbd-44e9-b6f4-871695ef16c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:24:32.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6320" for this suite. Jun 1 13:24:38.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:24:38.602: INFO: namespace emptydir-6320 deletion completed in 6.091873451s • [SLOW TEST:10.231 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:24:38.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 1 13:24:38.741: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4435,SelfLink:/api/v1/namespaces/watch-4435/configmaps/e2e-watch-test-resource-version,UID:bfd0e378-e303-47b4-9d35-2d590251f511,ResourceVersion:14085163,Generation:0,CreationTimestamp:2020-06-01 13:24:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 1 13:24:38.741: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4435,SelfLink:/api/v1/namespaces/watch-4435/configmaps/e2e-watch-test-resource-version,UID:bfd0e378-e303-47b4-9d35-2d590251f511,ResourceVersion:14085164,Generation:0,CreationTimestamp:2020-06-01 13:24:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:24:38.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4435" for this suite. Jun 1 13:24:44.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:24:44.828: INFO: namespace watch-4435 deletion completed in 6.084275199s • [SLOW TEST:6.226 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:24:44.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:24:44.979: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318" in namespace "projected-2423" to be "success or failure" Jun 1 13:24:44.991: INFO: Pod "downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318": Phase="Pending", Reason="", readiness=false. Elapsed: 12.470797ms Jun 1 13:24:46.996: INFO: Pod "downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016904733s Jun 1 13:24:49.000: INFO: Pod "downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020985464s STEP: Saw pod success Jun 1 13:24:49.000: INFO: Pod "downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318" satisfied condition "success or failure" Jun 1 13:24:49.003: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318 container client-container: STEP: delete the pod Jun 1 13:24:49.027: INFO: Waiting for pod downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318 to disappear Jun 1 13:24:49.031: INFO: Pod downwardapi-volume-e3321b91-a736-4ea3-a1ac-07ab89818318 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:24:49.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2423" for this suite. 
Jun 1 13:24:55.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:24:55.137: INFO: namespace projected-2423 deletion completed in 6.103531377s • [SLOW TEST:10.308 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:24:55.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-1dd98c5d-9b82-4ddb-8b2e-008edc538598 STEP: Creating a pod to test consume secrets Jun 1 13:24:55.224: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3" in namespace "projected-3105" to be "success or failure" Jun 1 13:24:55.228: INFO: Pod "pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.874195ms Jun 1 13:24:57.232: INFO: Pod "pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007885783s Jun 1 13:24:59.236: INFO: Pod "pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011927873s STEP: Saw pod success Jun 1 13:24:59.236: INFO: Pod "pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3" satisfied condition "success or failure" Jun 1 13:24:59.239: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3 container projected-secret-volume-test: STEP: delete the pod Jun 1 13:24:59.278: INFO: Waiting for pod pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3 to disappear Jun 1 13:24:59.313: INFO: Pod pod-projected-secrets-9512bb31-9097-44f0-b59e-934791610de3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:24:59.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3105" for this suite. 
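The projected-secret test above maps a secret key to a path with an explicit per-item mode. A manifest sketch of the same shape (secret name, key, and mount path are placeholders; `projected.sources[].secret.items[].mode` is the Kubernetes Pod API field being exercised):

```yaml
# Illustrative only: a projected secret volume with a per-item mode,
# which overrides any volume-level default for that file.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/projected/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: data-1
            mode: 0400
  restartPolicy: Never
```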
Jun 1 13:25:05.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:25:05.388: INFO: namespace projected-3105 deletion completed in 6.071173429s • [SLOW TEST:10.251 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:25:05.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5354.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5354.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 1 13:25:15.548: INFO: DNS probes using dns-5354/dns-test-ec9d64b6-00c9-4029-93b0-caee364cfbe3 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:25:15.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5354" for this suite. 
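The `awk -F.` fragment in the probe commands above rewrites the pod's IP into its dashed pod A-record name (so 10.244.1.112 in namespace dns-5354 becomes 10-244-1-112.dns-5354.pod.cluster.local, which `dig` then resolves). A small Python equivalent of that rewrite step, using the test's own namespace:

```python
# Mirror of the probe's awk step: turn a pod IP into the dashed
# A-record name inside a namespace's pod.cluster.local subdomain.
def pod_a_record(ip: str, namespace: str) -> str:
    return ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.112", "dns-5354"))
# → 10-244-1-112.dns-5354.pod.cluster.local
```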
Jun 1 13:25:24.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:25:24.246: INFO: namespace dns-5354 deletion completed in 8.508591312s • [SLOW TEST:18.858 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:25:24.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jun 1 13:25:24.962: INFO: Waiting up to 5m0s for pod "var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5" in namespace "var-expansion-9164" to be "success or failure" Jun 1 13:25:25.002: INFO: Pod "var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 39.925421ms Jun 1 13:25:27.006: INFO: Pod "var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043503777s Jun 1 13:25:29.242: INFO: Pod "var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.280346099s Jun 1 13:25:31.247: INFO: Pod "var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285081758s STEP: Saw pod success Jun 1 13:25:31.247: INFO: Pod "var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5" satisfied condition "success or failure" Jun 1 13:25:31.250: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5 container dapi-container: STEP: delete the pod Jun 1 13:25:31.482: INFO: Waiting for pod var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5 to disappear Jun 1 13:25:31.577: INFO: Pod var-expansion-38e32cf2-a1a1-42a7-9f54-7edf95198bf5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:25:31.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9164" for this suite. Jun 1 13:25:38.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:25:38.419: INFO: namespace var-expansion-9164 deletion completed in 6.836604896s • [SLOW TEST:14.172 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating 
a kubernetes client Jun 1 13:25:38.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0601 13:26:09.102070 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 1 13:26:09.102: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:26:09.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7783" for this suite. 
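The garbage-collector test above deletes the Deployment with `deleteOptions.PropagationPolicy: Orphan`, then waits 30 seconds to confirm the ReplicaSet is left behind. A sketch of the DeleteOptions body such a delete request carries (the field names are from the Kubernetes API; the HTTP request plumbing around it is omitted here):

```python
import json

# Hypothetical sketch: the DeleteOptions payload that tells the API
# server to orphan dependents instead of cascading the delete.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Orphan",  # leave the ReplicaSet in place
}

body = json.dumps(delete_options)
print(body)
```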
Jun 1 13:26:17.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:26:17.503: INFO: namespace gc-7783 deletion completed in 8.398720327s • [SLOW TEST:39.084 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:26:17.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 1 13:26:36.194: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.194: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:36.228123 6 log.go:172] 
(0xc0026a8840) (0xc001ee1860) Create stream I0601 13:26:36.228163 6 log.go:172] (0xc0026a8840) (0xc001ee1860) Stream added, broadcasting: 1 I0601 13:26:36.231000 6 log.go:172] (0xc0026a8840) Reply frame received for 1 I0601 13:26:36.231039 6 log.go:172] (0xc0026a8840) (0xc001d735e0) Create stream I0601 13:26:36.231058 6 log.go:172] (0xc0026a8840) (0xc001d735e0) Stream added, broadcasting: 3 I0601 13:26:36.232084 6 log.go:172] (0xc0026a8840) Reply frame received for 3 I0601 13:26:36.232120 6 log.go:172] (0xc0026a8840) (0xc001d73680) Create stream I0601 13:26:36.232133 6 log.go:172] (0xc0026a8840) (0xc001d73680) Stream added, broadcasting: 5 I0601 13:26:36.232963 6 log.go:172] (0xc0026a8840) Reply frame received for 5 I0601 13:26:36.306709 6 log.go:172] (0xc0026a8840) Data frame received for 3 I0601 13:26:36.306748 6 log.go:172] (0xc001d735e0) (3) Data frame handling I0601 13:26:36.306775 6 log.go:172] (0xc001d735e0) (3) Data frame sent I0601 13:26:36.306938 6 log.go:172] (0xc0026a8840) Data frame received for 5 I0601 13:26:36.306972 6 log.go:172] (0xc001d73680) (5) Data frame handling I0601 13:26:36.307224 6 log.go:172] (0xc0026a8840) Data frame received for 3 I0601 13:26:36.307234 6 log.go:172] (0xc001d735e0) (3) Data frame handling I0601 13:26:36.309528 6 log.go:172] (0xc0026a8840) Data frame received for 1 I0601 13:26:36.309547 6 log.go:172] (0xc001ee1860) (1) Data frame handling I0601 13:26:36.309555 6 log.go:172] (0xc001ee1860) (1) Data frame sent I0601 13:26:36.309567 6 log.go:172] (0xc0026a8840) (0xc001ee1860) Stream removed, broadcasting: 1 I0601 13:26:36.309611 6 log.go:172] (0xc0026a8840) Go away received I0601 13:26:36.309662 6 log.go:172] (0xc0026a8840) (0xc001ee1860) Stream removed, broadcasting: 1 I0601 13:26:36.309679 6 log.go:172] (0xc0026a8840) (0xc001d735e0) Stream removed, broadcasting: 3 I0601 13:26:36.309687 6 log.go:172] (0xc0026a8840) (0xc001d73680) Stream removed, broadcasting: 5 Jun 1 13:26:36.309: INFO: Exec stderr: "" Jun 1 13:26:36.309: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.309: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:36.339501 6 log.go:172] (0xc00084cfd0) (0xc001564500) Create stream I0601 13:26:36.339524 6 log.go:172] (0xc00084cfd0) (0xc001564500) Stream added, broadcasting: 1 I0601 13:26:36.342454 6 log.go:172] (0xc00084cfd0) Reply frame received for 1 I0601 13:26:36.342487 6 log.go:172] (0xc00084cfd0) (0xc001d73720) Create stream I0601 13:26:36.342497 6 log.go:172] (0xc00084cfd0) (0xc001d73720) Stream added, broadcasting: 3 I0601 13:26:36.343485 6 log.go:172] (0xc00084cfd0) Reply frame received for 3 I0601 13:26:36.343512 6 log.go:172] (0xc00084cfd0) (0xc001d737c0) Create stream I0601 13:26:36.343519 6 log.go:172] (0xc00084cfd0) (0xc001d737c0) Stream added, broadcasting: 5 I0601 13:26:36.344432 6 log.go:172] (0xc00084cfd0) Reply frame received for 5 I0601 13:26:36.418536 6 log.go:172] (0xc00084cfd0) Data frame received for 3 I0601 13:26:36.418575 6 log.go:172] (0xc001d73720) (3) Data frame handling I0601 13:26:36.418592 6 log.go:172] (0xc001d73720) (3) Data frame sent I0601 13:26:36.418603 6 log.go:172] (0xc00084cfd0) Data frame received for 3 I0601 13:26:36.418612 6 log.go:172] (0xc001d73720) (3) Data frame handling I0601 13:26:36.418666 6 log.go:172] (0xc00084cfd0) Data frame received for 5 I0601 13:26:36.418693 6 log.go:172] (0xc001d737c0) (5) Data frame handling I0601 13:26:36.420043 6 log.go:172] (0xc00084cfd0) Data frame received for 1 I0601 13:26:36.420069 6 log.go:172] (0xc001564500) (1) Data frame handling I0601 13:26:36.420096 6 log.go:172] (0xc001564500) (1) Data frame sent I0601 13:26:36.420129 6 log.go:172] (0xc00084cfd0) (0xc001564500) Stream removed, broadcasting: 1 I0601 13:26:36.420146 6 log.go:172] (0xc00084cfd0) Go away received I0601 13:26:36.420215 6 log.go:172] (0xc00084cfd0) 
(0xc001564500) Stream removed, broadcasting: 1 I0601 13:26:36.420232 6 log.go:172] (0xc00084cfd0) (0xc001d73720) Stream removed, broadcasting: 3 I0601 13:26:36.420245 6 log.go:172] (0xc00084cfd0) (0xc001d737c0) Stream removed, broadcasting: 5 Jun 1 13:26:36.420: INFO: Exec stderr: "" Jun 1 13:26:36.420: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.420: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:36.452043 6 log.go:172] (0xc000d95a20) (0xc001d73ae0) Create stream I0601 13:26:36.452083 6 log.go:172] (0xc000d95a20) (0xc001d73ae0) Stream added, broadcasting: 1 I0601 13:26:36.455027 6 log.go:172] (0xc000d95a20) Reply frame received for 1 I0601 13:26:36.455071 6 log.go:172] (0xc000d95a20) (0xc001ee19a0) Create stream I0601 13:26:36.455091 6 log.go:172] (0xc000d95a20) (0xc001ee19a0) Stream added, broadcasting: 3 I0601 13:26:36.455902 6 log.go:172] (0xc000d95a20) Reply frame received for 3 I0601 13:26:36.455919 6 log.go:172] (0xc000d95a20) (0xc001d73b80) Create stream I0601 13:26:36.455929 6 log.go:172] (0xc000d95a20) (0xc001d73b80) Stream added, broadcasting: 5 I0601 13:26:36.456833 6 log.go:172] (0xc000d95a20) Reply frame received for 5 I0601 13:26:36.508772 6 log.go:172] (0xc000d95a20) Data frame received for 5 I0601 13:26:36.508799 6 log.go:172] (0xc001d73b80) (5) Data frame handling I0601 13:26:36.508821 6 log.go:172] (0xc000d95a20) Data frame received for 3 I0601 13:26:36.508844 6 log.go:172] (0xc001ee19a0) (3) Data frame handling I0601 13:26:36.508861 6 log.go:172] (0xc001ee19a0) (3) Data frame sent I0601 13:26:36.508869 6 log.go:172] (0xc000d95a20) Data frame received for 3 I0601 13:26:36.508875 6 log.go:172] (0xc001ee19a0) (3) Data frame handling I0601 13:26:36.510757 6 log.go:172] (0xc000d95a20) Data frame received for 1 I0601 13:26:36.510776 6 log.go:172] (0xc001d73ae0) (1) Data frame 
handling I0601 13:26:36.510789 6 log.go:172] (0xc001d73ae0) (1) Data frame sent I0601 13:26:36.510812 6 log.go:172] (0xc000d95a20) (0xc001d73ae0) Stream removed, broadcasting: 1 I0601 13:26:36.510830 6 log.go:172] (0xc000d95a20) Go away received I0601 13:26:36.510962 6 log.go:172] (0xc000d95a20) (0xc001d73ae0) Stream removed, broadcasting: 1 I0601 13:26:36.511004 6 log.go:172] (0xc000d95a20) (0xc001ee19a0) Stream removed, broadcasting: 3 I0601 13:26:36.511021 6 log.go:172] (0xc000d95a20) (0xc001d73b80) Stream removed, broadcasting: 5 Jun 1 13:26:36.511: INFO: Exec stderr: "" Jun 1 13:26:36.511: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.511: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:36.535074 6 log.go:172] (0xc0026a94a0) (0xc001ee1cc0) Create stream I0601 13:26:36.535104 6 log.go:172] (0xc0026a94a0) (0xc001ee1cc0) Stream added, broadcasting: 1 I0601 13:26:36.537862 6 log.go:172] (0xc0026a94a0) Reply frame received for 1 I0601 13:26:36.537919 6 log.go:172] (0xc0026a94a0) (0xc0015645a0) Create stream I0601 13:26:36.537937 6 log.go:172] (0xc0026a94a0) (0xc0015645a0) Stream added, broadcasting: 3 I0601 13:26:36.538927 6 log.go:172] (0xc0026a94a0) Reply frame received for 3 I0601 13:26:36.538963 6 log.go:172] (0xc0026a94a0) (0xc001e16aa0) Create stream I0601 13:26:36.538976 6 log.go:172] (0xc0026a94a0) (0xc001e16aa0) Stream added, broadcasting: 5 I0601 13:26:36.539913 6 log.go:172] (0xc0026a94a0) Reply frame received for 5 I0601 13:26:36.595518 6 log.go:172] (0xc0026a94a0) Data frame received for 3 I0601 13:26:36.595557 6 log.go:172] (0xc0015645a0) (3) Data frame handling I0601 13:26:36.595585 6 log.go:172] (0xc0015645a0) (3) Data frame sent I0601 13:26:36.595651 6 log.go:172] (0xc0026a94a0) Data frame received for 3 I0601 13:26:36.595681 6 log.go:172] (0xc0015645a0) (3) Data frame 
handling I0601 13:26:36.595802 6 log.go:172] (0xc0026a94a0) Data frame received for 5 I0601 13:26:36.595816 6 log.go:172] (0xc001e16aa0) (5) Data frame handling I0601 13:26:36.600195 6 log.go:172] (0xc0026a94a0) Data frame received for 1 I0601 13:26:36.600225 6 log.go:172] (0xc001ee1cc0) (1) Data frame handling I0601 13:26:36.600235 6 log.go:172] (0xc001ee1cc0) (1) Data frame sent I0601 13:26:36.600797 6 log.go:172] (0xc0026a94a0) (0xc001ee1cc0) Stream removed, broadcasting: 1 I0601 13:26:36.600900 6 log.go:172] (0xc0026a94a0) (0xc001ee1cc0) Stream removed, broadcasting: 1 I0601 13:26:36.600916 6 log.go:172] (0xc0026a94a0) (0xc0015645a0) Stream removed, broadcasting: 3 I0601 13:26:36.601234 6 log.go:172] (0xc0026a94a0) Go away received I0601 13:26:36.601521 6 log.go:172] (0xc0026a94a0) (0xc001e16aa0) Stream removed, broadcasting: 5 Jun 1 13:26:36.601: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 1 13:26:36.601: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.601: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:36.722758 6 log.go:172] (0xc000c52dc0) (0xc002b95d60) Create stream I0601 13:26:36.722789 6 log.go:172] (0xc000c52dc0) (0xc002b95d60) Stream added, broadcasting: 1 I0601 13:26:36.725313 6 log.go:172] (0xc000c52dc0) Reply frame received for 1 I0601 13:26:36.725365 6 log.go:172] (0xc000c52dc0) (0xc001e16b40) Create stream I0601 13:26:36.725386 6 log.go:172] (0xc000c52dc0) (0xc001e16b40) Stream added, broadcasting: 3 I0601 13:26:36.726488 6 log.go:172] (0xc000c52dc0) Reply frame received for 3 I0601 13:26:36.726531 6 log.go:172] (0xc000c52dc0) (0xc001e16be0) Create stream I0601 13:26:36.726544 6 log.go:172] (0xc000c52dc0) (0xc001e16be0) Stream added, broadcasting: 5 I0601 13:26:36.727601 6 log.go:172] 
(0xc000c52dc0) Reply frame received for 5 I0601 13:26:36.791473 6 log.go:172] (0xc000c52dc0) Data frame received for 5 I0601 13:26:36.791530 6 log.go:172] (0xc001e16be0) (5) Data frame handling I0601 13:26:36.791575 6 log.go:172] (0xc000c52dc0) Data frame received for 3 I0601 13:26:36.791605 6 log.go:172] (0xc001e16b40) (3) Data frame handling I0601 13:26:36.791642 6 log.go:172] (0xc001e16b40) (3) Data frame sent I0601 13:26:36.791666 6 log.go:172] (0xc000c52dc0) Data frame received for 3 I0601 13:26:36.791686 6 log.go:172] (0xc001e16b40) (3) Data frame handling I0601 13:26:36.792615 6 log.go:172] (0xc000c52dc0) Data frame received for 1 I0601 13:26:36.792676 6 log.go:172] (0xc002b95d60) (1) Data frame handling I0601 13:26:36.792733 6 log.go:172] (0xc002b95d60) (1) Data frame sent I0601 13:26:36.792765 6 log.go:172] (0xc000c52dc0) (0xc002b95d60) Stream removed, broadcasting: 1 I0601 13:26:36.792790 6 log.go:172] (0xc000c52dc0) Go away received I0601 13:26:36.792917 6 log.go:172] (0xc000c52dc0) (0xc002b95d60) Stream removed, broadcasting: 1 I0601 13:26:36.792947 6 log.go:172] (0xc000c52dc0) (0xc001e16b40) Stream removed, broadcasting: 3 I0601 13:26:36.792958 6 log.go:172] (0xc000c52dc0) (0xc001e16be0) Stream removed, broadcasting: 5 Jun 1 13:26:36.792: INFO: Exec stderr: "" Jun 1 13:26:36.792: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.793: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:36.825540 6 log.go:172] (0xc002dac630) (0xc001564d20) Create stream I0601 13:26:36.825563 6 log.go:172] (0xc002dac630) (0xc001564d20) Stream added, broadcasting: 1 I0601 13:26:36.827865 6 log.go:172] (0xc002dac630) Reply frame received for 1 I0601 13:26:36.827931 6 log.go:172] (0xc002dac630) (0xc001ee1d60) Create stream I0601 13:26:36.827952 6 log.go:172] (0xc002dac630) (0xc001ee1d60) Stream added, 
broadcasting: 3 I0601 13:26:36.828937 6 log.go:172] (0xc002dac630) Reply frame received for 3 I0601 13:26:36.828967 6 log.go:172] (0xc002dac630) (0xc001564e60) Create stream I0601 13:26:36.828976 6 log.go:172] (0xc002dac630) (0xc001564e60) Stream added, broadcasting: 5 I0601 13:26:36.830282 6 log.go:172] (0xc002dac630) Reply frame received for 5 I0601 13:26:36.885088 6 log.go:172] (0xc002dac630) Data frame received for 5 I0601 13:26:36.885254 6 log.go:172] (0xc001564e60) (5) Data frame handling I0601 13:26:36.885278 6 log.go:172] (0xc002dac630) Data frame received for 3 I0601 13:26:36.885313 6 log.go:172] (0xc001ee1d60) (3) Data frame handling I0601 13:26:36.885320 6 log.go:172] (0xc001ee1d60) (3) Data frame sent I0601 13:26:36.885327 6 log.go:172] (0xc002dac630) Data frame received for 3 I0601 13:26:36.885331 6 log.go:172] (0xc001ee1d60) (3) Data frame handling I0601 13:26:36.885349 6 log.go:172] (0xc002dac630) Data frame received for 1 I0601 13:26:36.885355 6 log.go:172] (0xc001564d20) (1) Data frame handling I0601 13:26:36.885363 6 log.go:172] (0xc001564d20) (1) Data frame sent I0601 13:26:36.885373 6 log.go:172] (0xc002dac630) (0xc001564d20) Stream removed, broadcasting: 1 I0601 13:26:36.885382 6 log.go:172] (0xc002dac630) Go away received I0601 13:26:36.885552 6 log.go:172] (0xc002dac630) (0xc001564d20) Stream removed, broadcasting: 1 I0601 13:26:36.885583 6 log.go:172] (0xc002dac630) (0xc001ee1d60) Stream removed, broadcasting: 3 I0601 13:26:36.885608 6 log.go:172] (0xc002dac630) (0xc001564e60) Stream removed, broadcasting: 5 Jun 1 13:26:36.885: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 1 13:26:36.885: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.885: INFO: >>> kubeConfig: /root/.kube/config I0601 
13:26:36.919600 6 log.go:172] (0xc000c53970) (0xc0030a40a0) Create stream I0601 13:26:36.919643 6 log.go:172] (0xc000c53970) (0xc0030a40a0) Stream added, broadcasting: 1 I0601 13:26:36.922745 6 log.go:172] (0xc000c53970) Reply frame received for 1 I0601 13:26:36.922802 6 log.go:172] (0xc000c53970) (0xc001e16c80) Create stream I0601 13:26:36.922826 6 log.go:172] (0xc000c53970) (0xc001e16c80) Stream added, broadcasting: 3 I0601 13:26:36.923721 6 log.go:172] (0xc000c53970) Reply frame received for 3 I0601 13:26:36.923805 6 log.go:172] (0xc000c53970) (0xc0030a4140) Create stream I0601 13:26:36.923830 6 log.go:172] (0xc000c53970) (0xc0030a4140) Stream added, broadcasting: 5 I0601 13:26:36.925003 6 log.go:172] (0xc000c53970) Reply frame received for 5 I0601 13:26:36.978147 6 log.go:172] (0xc000c53970) Data frame received for 5 I0601 13:26:36.978188 6 log.go:172] (0xc0030a4140) (5) Data frame handling I0601 13:26:36.978229 6 log.go:172] (0xc000c53970) Data frame received for 3 I0601 13:26:36.978245 6 log.go:172] (0xc001e16c80) (3) Data frame handling I0601 13:26:36.978269 6 log.go:172] (0xc001e16c80) (3) Data frame sent I0601 13:26:36.978283 6 log.go:172] (0xc000c53970) Data frame received for 3 I0601 13:26:36.978294 6 log.go:172] (0xc001e16c80) (3) Data frame handling I0601 13:26:36.979348 6 log.go:172] (0xc000c53970) Data frame received for 1 I0601 13:26:36.979369 6 log.go:172] (0xc0030a40a0) (1) Data frame handling I0601 13:26:36.979382 6 log.go:172] (0xc0030a40a0) (1) Data frame sent I0601 13:26:36.979390 6 log.go:172] (0xc000c53970) (0xc0030a40a0) Stream removed, broadcasting: 1 I0601 13:26:36.979434 6 log.go:172] (0xc000c53970) Go away received I0601 13:26:36.979474 6 log.go:172] (0xc000c53970) (0xc0030a40a0) Stream removed, broadcasting: 1 I0601 13:26:36.979491 6 log.go:172] (0xc000c53970) (0xc001e16c80) Stream removed, broadcasting: 3 I0601 13:26:36.979500 6 log.go:172] (0xc000c53970) (0xc0030a4140) Stream removed, broadcasting: 5 Jun 1 13:26:36.979: INFO: Exec 
stderr: "" Jun 1 13:26:36.979: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:36.979: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:37.006685 6 log.go:172] (0xc002e7d4a0) (0xc001e17040) Create stream I0601 13:26:37.006717 6 log.go:172] (0xc002e7d4a0) (0xc001e17040) Stream added, broadcasting: 1 I0601 13:26:37.009412 6 log.go:172] (0xc002e7d4a0) Reply frame received for 1 I0601 13:26:37.009498 6 log.go:172] (0xc002e7d4a0) (0xc001564f00) Create stream I0601 13:26:37.009528 6 log.go:172] (0xc002e7d4a0) (0xc001564f00) Stream added, broadcasting: 3 I0601 13:26:37.010418 6 log.go:172] (0xc002e7d4a0) Reply frame received for 3 I0601 13:26:37.010458 6 log.go:172] (0xc002e7d4a0) (0xc0015650e0) Create stream I0601 13:26:37.010469 6 log.go:172] (0xc002e7d4a0) (0xc0015650e0) Stream added, broadcasting: 5 I0601 13:26:37.011317 6 log.go:172] (0xc002e7d4a0) Reply frame received for 5 I0601 13:26:37.058421 6 log.go:172] (0xc002e7d4a0) Data frame received for 5 I0601 13:26:37.058468 6 log.go:172] (0xc0015650e0) (5) Data frame handling I0601 13:26:37.058495 6 log.go:172] (0xc002e7d4a0) Data frame received for 3 I0601 13:26:37.058509 6 log.go:172] (0xc001564f00) (3) Data frame handling I0601 13:26:37.058530 6 log.go:172] (0xc001564f00) (3) Data frame sent I0601 13:26:37.058546 6 log.go:172] (0xc002e7d4a0) Data frame received for 3 I0601 13:26:37.058558 6 log.go:172] (0xc001564f00) (3) Data frame handling I0601 13:26:37.059901 6 log.go:172] (0xc002e7d4a0) Data frame received for 1 I0601 13:26:37.059927 6 log.go:172] (0xc001e17040) (1) Data frame handling I0601 13:26:37.059945 6 log.go:172] (0xc001e17040) (1) Data frame sent I0601 13:26:37.059965 6 log.go:172] (0xc002e7d4a0) (0xc001e17040) Stream removed, broadcasting: 1 I0601 13:26:37.060008 6 log.go:172] (0xc002e7d4a0) Go away received I0601 
13:26:37.060122 6 log.go:172] (0xc002e7d4a0) (0xc001e17040) Stream removed, broadcasting: 1 I0601 13:26:37.060148 6 log.go:172] (0xc002e7d4a0) (0xc001564f00) Stream removed, broadcasting: 3 I0601 13:26:37.060168 6 log.go:172] (0xc002e7d4a0) (0xc0015650e0) Stream removed, broadcasting: 5 Jun 1 13:26:37.060: INFO: Exec stderr: "" Jun 1 13:26:37.060: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:37.060: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:37.097420 6 log.go:172] (0xc002e7dce0) (0xc001e17400) Create stream I0601 13:26:37.097445 6 log.go:172] (0xc002e7dce0) (0xc001e17400) Stream added, broadcasting: 1 I0601 13:26:37.099811 6 log.go:172] (0xc002e7dce0) Reply frame received for 1 I0601 13:26:37.099846 6 log.go:172] (0xc002e7dce0) (0xc001d73c20) Create stream I0601 13:26:37.099863 6 log.go:172] (0xc002e7dce0) (0xc001d73c20) Stream added, broadcasting: 3 I0601 13:26:37.100975 6 log.go:172] (0xc002e7dce0) Reply frame received for 3 I0601 13:26:37.101017 6 log.go:172] (0xc002e7dce0) (0xc001d73cc0) Create stream I0601 13:26:37.101037 6 log.go:172] (0xc002e7dce0) (0xc001d73cc0) Stream added, broadcasting: 5 I0601 13:26:37.102130 6 log.go:172] (0xc002e7dce0) Reply frame received for 5 I0601 13:26:37.160923 6 log.go:172] (0xc002e7dce0) Data frame received for 5 I0601 13:26:37.160960 6 log.go:172] (0xc001d73cc0) (5) Data frame handling I0601 13:26:37.160981 6 log.go:172] (0xc002e7dce0) Data frame received for 3 I0601 13:26:37.160991 6 log.go:172] (0xc001d73c20) (3) Data frame handling I0601 13:26:37.161001 6 log.go:172] (0xc001d73c20) (3) Data frame sent I0601 13:26:37.161011 6 log.go:172] (0xc002e7dce0) Data frame received for 3 I0601 13:26:37.161020 6 log.go:172] (0xc001d73c20) (3) Data frame handling I0601 13:26:37.162586 6 log.go:172] (0xc002e7dce0) Data frame received for 1 I0601 
13:26:37.162605 6 log.go:172] (0xc001e17400) (1) Data frame handling I0601 13:26:37.162613 6 log.go:172] (0xc001e17400) (1) Data frame sent I0601 13:26:37.162622 6 log.go:172] (0xc002e7dce0) (0xc001e17400) Stream removed, broadcasting: 1 I0601 13:26:37.162696 6 log.go:172] (0xc002e7dce0) (0xc001e17400) Stream removed, broadcasting: 1 I0601 13:26:37.162711 6 log.go:172] (0xc002e7dce0) (0xc001d73c20) Stream removed, broadcasting: 3 I0601 13:26:37.162717 6 log.go:172] (0xc002e7dce0) (0xc001d73cc0) Stream removed, broadcasting: 5 Jun 1 13:26:37.162: INFO: Exec stderr: "" Jun 1 13:26:37.162: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5976 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:26:37.162: INFO: >>> kubeConfig: /root/.kube/config I0601 13:26:37.164407 6 log.go:172] (0xc002e7dce0) Go away received I0601 13:26:37.188474 6 log.go:172] (0xc002b32a50) (0xc001ee1ea0) Create stream I0601 13:26:37.188507 6 log.go:172] (0xc002b32a50) (0xc001ee1ea0) Stream added, broadcasting: 1 I0601 13:26:37.191320 6 log.go:172] (0xc002b32a50) Reply frame received for 1 I0601 13:26:37.191362 6 log.go:172] (0xc002b32a50) (0xc002d90000) Create stream I0601 13:26:37.191375 6 log.go:172] (0xc002b32a50) (0xc002d90000) Stream added, broadcasting: 3 I0601 13:26:37.192104 6 log.go:172] (0xc002b32a50) Reply frame received for 3 I0601 13:26:37.192124 6 log.go:172] (0xc002b32a50) (0xc001e174a0) Create stream I0601 13:26:37.192130 6 log.go:172] (0xc002b32a50) (0xc001e174a0) Stream added, broadcasting: 5 I0601 13:26:37.192847 6 log.go:172] (0xc002b32a50) Reply frame received for 5 I0601 13:26:37.243945 6 log.go:172] (0xc002b32a50) Data frame received for 3 I0601 13:26:37.243996 6 log.go:172] (0xc002d90000) (3) Data frame handling I0601 13:26:37.244035 6 log.go:172] (0xc002d90000) (3) Data frame sent I0601 13:26:37.244053 6 log.go:172] (0xc002b32a50) Data frame received 
for 3 I0601 13:26:37.244090 6 log.go:172] (0xc002b32a50) Data frame received for 5 I0601 13:26:37.244139 6 log.go:172] (0xc001e174a0) (5) Data frame handling I0601 13:26:37.244171 6 log.go:172] (0xc002d90000) (3) Data frame handling I0601 13:26:37.245736 6 log.go:172] (0xc002b32a50) Data frame received for 1 I0601 13:26:37.245768 6 log.go:172] (0xc001ee1ea0) (1) Data frame handling I0601 13:26:37.245784 6 log.go:172] (0xc001ee1ea0) (1) Data frame sent I0601 13:26:37.245805 6 log.go:172] (0xc002b32a50) (0xc001ee1ea0) Stream removed, broadcasting: 1 I0601 13:26:37.245822 6 log.go:172] (0xc002b32a50) Go away received I0601 13:26:37.245984 6 log.go:172] (0xc002b32a50) (0xc001ee1ea0) Stream removed, broadcasting: 1 I0601 13:26:37.246010 6 log.go:172] (0xc002b32a50) (0xc002d90000) Stream removed, broadcasting: 3 I0601 13:26:37.246022 6 log.go:172] (0xc002b32a50) (0xc001e174a0) Stream removed, broadcasting: 5 Jun 1 13:26:37.246: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:26:37.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5976" for this suite. 
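The KubeletManagedEtcHosts test above execs `cat /etc/hosts` and `cat /etc/hosts-original` in each container and decides whether kubelet manages the file. The check hinges on the header comment kubelet writes into any hosts file it manages. A minimal sketch of that detection logic (the helper name and sample contents are illustrative, not from the suite):

```python
# Header kubelet prepends to every /etc/hosts file it manages.
MANAGED_HEADER = "# Kubernetes-managed hosts file"

def is_kubelet_managed(etc_hosts_content: str) -> bool:
    """True if the file carries the kubelet-managed header.

    This mirrors what the e2e assertions check: containers that mount
    their own /etc/hosts, or pods with hostNetwork=true, should NOT
    see this header."""
    return MANAGED_HEADER in etc_hosts_content

# Illustrative file contents only:
managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n10.244.1.5\ttest-pod\n"
unmanaged = "127.0.0.1\tlocalhost\n"
```

This is why the test distinguishes busybox-1/2 (kubelet-managed) from busybox-3 (mounts its own /etc/hosts) and from the hostNetwork pod: only the first group's files carry the header.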
Jun 1 13:27:25.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:27:25.434: INFO: namespace e2e-kubelet-etc-hosts-5976 deletion completed in 48.183001599s • [SLOW TEST:67.930 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:27:25.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:27:25.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3297" for this suite. 
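The "Pods Set QOS Class" test above verifies that the API server populates `status.qosClass` from the pod's resource requests and limits. A simplified sketch of the assignment rules (the real logic lives in k8s.io/apimachinery helpers and also handles defaulted requests and extended resources; the function and input shape here are assumptions for illustration):

```python
def qos_class(containers):
    """containers: list of dicts with optional 'requests'/'limits' dicts
    mapping resource name -> quantity string.

    Guaranteed: every container sets cpu and memory with requests == limits.
    BestEffort: no container sets any requests or limits.
    Burstable:  everything in between."""
    all_guaranteed = True
    any_set = False
    for c in containers:
        req = c.get("requests", {})
        lim = c.get("limits", {})
        if req or lim:
            any_set = True
        for res in ("cpu", "memory"):
            if req.get(res) is None or req.get(res) != lim.get(res):
                all_guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if all_guaranteed else "Burstable"
```

The test only needs to observe that some QoS class is set on submission and that the pod can then be removed; the classification rules above are what determine which value appears.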
Jun 1 13:30:51.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:30:51.885: INFO: namespace pods-3297 deletion completed in 3m26.24784264s • [SLOW TEST:206.451 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:30:51.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 1 13:30:52.090: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086136,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 1 13:30:52.090: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086136,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 1 13:31:02.098: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086156,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 1 13:31:02.098: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086156,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 1 13:31:12.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086177,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 1 13:31:12.106: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086177,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 1 13:31:22.113: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086197,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 1 13:31:22.113: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-a,UID:dd056251-f2a7-40a4-94dc-5bf2fec1847b,ResourceVersion:14086197,Generation:0,CreationTimestamp:2020-06-01 13:30:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 1 13:31:32.120: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-b,UID:d4727445-1f5c-49ef-85bc-902c856bfacc,ResourceVersion:14086217,Generation:0,CreationTimestamp:2020-06-01 13:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 1 13:31:32.120: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-b,UID:d4727445-1f5c-49ef-85bc-902c856bfacc,ResourceVersion:14086217,Generation:0,CreationTimestamp:2020-06-01 13:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 1 13:31:42.137: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-b,UID:d4727445-1f5c-49ef-85bc-902c856bfacc,ResourceVersion:14086237,Generation:0,CreationTimestamp:2020-06-01 13:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 1 13:31:42.137: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4747,SelfLink:/api/v1/namespaces/watch-4747/configmaps/e2e-watch-test-configmap-b,UID:d4727445-1f5c-49ef-85bc-902c856bfacc,ResourceVersion:14086237,Generation:0,CreationTimestamp:2020-06-01 13:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:31:52.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4747" for this suite. 
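In the Watchers test above, every `Got : ADDED/MODIFIED/DELETED` line appears twice because each event matches two of the three watches: the exact-label watch (A or B) and the set-based A-or-B watch. A sketch of that label-selector dispatch (watcher names and the dict representation are illustrative):

```python
def matching_watchers(obj_labels, watchers):
    """watchers: dict of watcher name -> set of accepted values for the
    'watch-this-configmap' label, approximating equality-based and
    set-based ('in (...)') selectors. Returns the watchers that would
    observe an event for an object with these labels."""
    value = obj_labels.get("watch-this-configmap")
    return sorted(name for name, accepted in watchers.items() if value in accepted)

watchers = {
    "watch-A": {"multiple-watchers-A"},
    "watch-B": {"multiple-watchers-B"},
    "watch-A-or-B": {"multiple-watchers-A", "multiple-watchers-B"},
}
```

An event on `e2e-watch-test-configmap-a` is delivered to watch-A and watch-A-or-B but not watch-B, which is exactly the pairing of duplicate log lines seen above.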
Jun 1 13:31:58.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:31:58.284: INFO: namespace watch-4747 deletion completed in 6.142787261s • [SLOW TEST:66.399 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:31:58.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jun 1 13:31:58.485: INFO: Waiting up to 5m0s for pod "client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302" in namespace "containers-1942" to be "success or failure" Jun 1 13:31:58.524: INFO: Pod "client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302": Phase="Pending", Reason="", readiness=false. Elapsed: 38.285419ms Jun 1 13:32:00.676: INFO: Pod "client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.19043701s Jun 1 13:32:02.866: INFO: Pod "client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380227426s Jun 1 13:32:04.870: INFO: Pod "client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.384343762s STEP: Saw pod success Jun 1 13:32:04.870: INFO: Pod "client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302" satisfied condition "success or failure" Jun 1 13:32:04.872: INFO: Trying to get logs from node iruya-worker2 pod client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302 container test-container: STEP: delete the pod Jun 1 13:32:04.989: INFO: Waiting for pod client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302 to disappear Jun 1 13:32:04.995: INFO: Pod client-containers-e0176c44-56ba-476b-a1e5-b25b5dcff302 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:32:04.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1942" for this suite. 
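The Docker Containers test above waits up to 5m0s for the pod to satisfy the "success or failure" condition, polling its phase (Pending, Pending, ..., Succeeded in the log). A sketch of that polling loop, assuming a caller-supplied `get_phase` callable rather than a real API client:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase
    ('Succeeded' or 'Failed'), mirroring the framework's
    'success or failure' wait. Raises TimeoutError otherwise.

    clock/sleep are injectable so the loop can be tested without
    real delays."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

For this test, reaching `Succeeded` means the overridden arguments ran to completion; the framework then fetches container logs to verify the output before deleting the pod.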
Jun 1 13:32:11.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:32:11.115: INFO: namespace containers-1942 deletion completed in 6.116149537s • [SLOW TEST:12.829 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:32:11.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 1 13:32:11.207: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:32:24.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2073" for this suite. 
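The init-container test above relies on the guarantee that, on a `restartPolicy: Never` pod, each init container runs to completion in declaration order before any app container starts. A sketch of that shape of spec — names and images are assumptions, not values from the log:

```python
# Illustrative pod spec: init containers live under spec.initContainers,
# separate from spec.containers, and the kubelet starts them one at a time.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "init-container-example"},
    "spec": {
        "restartPolicy": "Never",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["/bin/true"]},
            {"name": "init2", "image": "busybox", "command": ["/bin/true"]},
        ],
        "containers": [
            {"name": "run1", "image": "busybox", "command": ["/bin/true"]},
        ],
    },
}

# Ordering is positional: init1 must succeed before init2 is started.
init_order = [c["name"] for c in pod["spec"]["initContainers"]]
```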
Jun 1 13:32:32.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:32:32.524: INFO: namespace init-container-2073 deletion completed in 8.136534042s • [SLOW TEST:21.409 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:32:32.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-01ccb3fd-f2c1-4fa5-9fd8-2f0b2119e9e7 STEP: Creating a pod to test consume secrets Jun 1 13:32:32.885: INFO: Waiting up to 5m0s for pod "pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e" in namespace "secrets-6581" to be "success or failure" Jun 1 13:32:32.931: INFO: Pod "pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.274255ms Jun 1 13:32:35.040: INFO: Pod "pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.154380908s Jun 1 13:32:37.043: INFO: Pod "pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157783434s Jun 1 13:32:39.048: INFO: Pod "pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.162614569s STEP: Saw pod success Jun 1 13:32:39.048: INFO: Pod "pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e" satisfied condition "success or failure" Jun 1 13:32:39.051: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e container secret-env-test: STEP: delete the pod Jun 1 13:32:39.356: INFO: Waiting for pod pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e to disappear Jun 1 13:32:39.481: INFO: Pod pod-secrets-5fd20bbc-92b3-4ba7-bd49-f986fbe7546e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:32:39.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6581" for this suite. 
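The Secrets test above creates a Secret and then consumes one of its keys as a container environment variable via `valueFrom.secretKeyRef`. A hypothetical sketch of the two objects involved (key names and values are illustrative):

```python
import base64

# Secret data is base64-encoded in the API object.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test-example"},
    "data": {"data-1": base64.b64encode(b"value-1").decode()},
}

# The consuming container maps the key into an env var by reference;
# the kubelet resolves it at container start.
container = {
    "name": "secret-env-test",
    "image": "busybox",
    "command": ["sh", "-c", "env"],
    "env": [{
        "name": "SECRET_DATA",
        "valueFrom": {"secretKeyRef": {
            "name": secret["metadata"]["name"],
            "key": "data-1",
        }},
    }],
}
```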
Jun 1 13:32:45.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:32:45.574: INFO: namespace secrets-6581 deletion completed in 6.088758484s • [SLOW TEST:13.050 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:32:45.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:32:45.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4363" for this suite. 
Jun 1 13:32:51.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:32:51.940: INFO: namespace services-4363 deletion completed in 6.179540019s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.366 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:32:51.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 1 13:32:58.677: INFO: Successfully updated pod "pod-update-8e208cc6-d24a-42d4-bc3f-748251e32f39" STEP: verifying the updated pod is in kubernetes Jun 1 13:32:58.689: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:32:58.689: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "pods-888" for this suite. Jun 1 13:33:22.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:33:22.795: INFO: namespace pods-888 deletion completed in 24.102638375s • [SLOW TEST:30.855 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:33:22.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-64a28462-dac3-496e-8eee-8ac641860834 STEP: Creating secret with name secret-projected-all-test-volume-6b8d3b87-9f9a-4b68-ae45-56c3e703dfef STEP: Creating a pod to test Check all projections for projected volume plugin Jun 1 13:33:22.993: INFO: Waiting up to 5m0s for pod "projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b" in namespace "projected-4626" to be "success or failure" Jun 1 13:33:23.046: INFO: Pod "projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 53.021684ms Jun 1 13:33:25.050: INFO: Pod "projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056974814s Jun 1 13:33:27.118: INFO: Pod "projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125061027s Jun 1 13:33:29.123: INFO: Pod "projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129750737s STEP: Saw pod success Jun 1 13:33:29.123: INFO: Pod "projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b" satisfied condition "success or failure" Jun 1 13:33:29.125: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b container projected-all-volume-test: STEP: delete the pod Jun 1 13:33:29.174: INFO: Waiting for pod projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b to disappear Jun 1 13:33:29.182: INFO: Pod projected-volume-5f5eeab8-580c-408a-a494-a70c461bc32b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:33:29.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4626" for this suite. 
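The projected-volume test above mounts a ConfigMap, a Secret, and downward-API fields through a single `projected` volume, which is what "all components that make up the projection API" refers to. A sketch of that volume definition, with hypothetical source names:

```python
# One projected volume can merge several sources under a single mount point.
volume = {
    "name": "projected-all-volume",
    "projected": {
        "sources": [
            {"configMap": {"name": "example-configmap"}},
            {"secret": {"name": "example-secret"}},
            {"downwardAPI": {"items": [{
                "path": "podname",
                "fieldRef": {"fieldPath": "metadata.name"},
            }]}},
        ],
    },
}
```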
Jun 1 13:33:35.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:33:35.315: INFO: namespace projected-4626 deletion completed in 6.129467318s • [SLOW TEST:12.520 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:33:35.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:33:35.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67" in namespace "projected-585" to be "success or failure" Jun 1 13:33:35.482: INFO: Pod "downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.289075ms Jun 1 13:33:37.486: INFO: Pod "downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007308568s Jun 1 13:33:39.592: INFO: Pod "downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113056853s Jun 1 13:33:41.595: INFO: Pod "downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116899599s STEP: Saw pod success Jun 1 13:33:41.595: INFO: Pod "downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67" satisfied condition "success or failure" Jun 1 13:33:41.598: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67 container client-container: STEP: delete the pod Jun 1 13:33:41.627: INFO: Waiting for pod downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67 to disappear Jun 1 13:33:41.680: INFO: Pod downwardapi-volume-970cae57-587e-47a8-a28c-e9d6b2416f67 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:33:41.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-585" for this suite. 
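The downward-API test above exposes a container's CPU request as a file in a volume. The mechanism is a `resourceFieldRef` item: it names the container whose resource to read and a divisor that fixes the units written to the file. A sketch with assumed names:

```python
# Hypothetical downward-API volume item: writes the named container's
# requests.cpu to the file "cpu_request", expressed in millicores ("1m").
container_name = "client-container"  # illustrative
volume = {
    "name": "podinfo",
    "downwardAPI": {"items": [{
        "path": "cpu_request",
        "resourceFieldRef": {
            "containerName": container_name,
            "resource": "requests.cpu",
            "divisor": "1m",
        },
    }]},
}
```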
Jun 1 13:33:47.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:33:47.837: INFO: namespace projected-585 deletion completed in 6.152738119s • [SLOW TEST:12.522 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:33:47.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 1 13:33:54.663: INFO: Successfully updated pod "pod-update-activedeadlineseconds-58d1d643-ddaf-40d8-91ef-1a49f93afeaf" Jun 1 13:33:54.663: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-58d1d643-ddaf-40d8-91ef-1a49f93afeaf" in namespace "pods-7291" to be "terminated due to deadline exceeded" Jun 1 13:33:54.772: INFO: Pod "pod-update-activedeadlineseconds-58d1d643-ddaf-40d8-91ef-1a49f93afeaf": 
Phase="Running", Reason="", readiness=true. Elapsed: 108.61634ms Jun 1 13:33:56.776: INFO: Pod "pod-update-activedeadlineseconds-58d1d643-ddaf-40d8-91ef-1a49f93afeaf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.112818888s Jun 1 13:33:56.776: INFO: Pod "pod-update-activedeadlineseconds-58d1d643-ddaf-40d8-91ef-1a49f93afeaf" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:33:56.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7291" for this suite. Jun 1 13:34:02.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:34:02.901: INFO: namespace pods-7291 deletion completed in 6.120325984s • [SLOW TEST:15.064 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:34:02.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 
[It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:34:03.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c" in namespace "projected-4776" to be "success or failure" Jun 1 13:34:03.153: INFO: Pod "downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092682ms Jun 1 13:34:05.157: INFO: Pod "downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012014286s Jun 1 13:34:07.161: INFO: Pod "downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015543653s Jun 1 13:34:09.166: INFO: Pod "downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020809475s STEP: Saw pod success Jun 1 13:34:09.166: INFO: Pod "downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c" satisfied condition "success or failure" Jun 1 13:34:09.169: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c container client-container: STEP: delete the pod Jun 1 13:34:09.353: INFO: Waiting for pod downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c to disappear Jun 1 13:34:09.495: INFO: Pod downwardapi-volume-5d41115b-db35-40bb-9421-26652e92ae3c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:34:09.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4776" for this suite. 
Jun 1 13:34:15.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:34:15.644: INFO: namespace projected-4776 deletion completed in 6.144844004s • [SLOW TEST:12.743 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:34:15.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 1 13:34:15.822: INFO: Waiting up to 5m0s for pod "pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4" in namespace "emptydir-3573" to be "success or failure" Jun 1 13:34:15.824: INFO: Pod "pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475676ms Jun 1 13:34:17.828: INFO: Pod "pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005932395s Jun 1 13:34:19.832: INFO: Pod "pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.01020459s Jun 1 13:34:21.836: INFO: Pod "pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013941177s STEP: Saw pod success Jun 1 13:34:21.836: INFO: Pod "pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4" satisfied condition "success or failure" Jun 1 13:34:21.838: INFO: Trying to get logs from node iruya-worker2 pod pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4 container test-container: STEP: delete the pod Jun 1 13:34:21.923: INFO: Waiting for pod pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4 to disappear Jun 1 13:34:21.962: INFO: Pod pod-73a45279-70d4-4d33-a1c8-7d68be8a9ee4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:34:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3573" for this suite. Jun 1 13:34:27.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:34:28.103: INFO: namespace emptydir-3573 deletion completed in 6.136747252s • [SLOW TEST:12.458 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:34:28.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:34:34.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9069" for this suite. Jun 1 13:35:16.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:35:16.488: INFO: namespace kubelet-test-9069 deletion completed in 42.178208276s • [SLOW TEST:48.385 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:35:16.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-de1efe6e-304b-4e5e-96de-5477a56bf5c2 in namespace container-probe-1937 Jun 1 13:35:22.729: INFO: Started pod busybox-de1efe6e-304b-4e5e-96de-5477a56bf5c2 in namespace container-probe-1937 STEP: checking the pod's current state and verifying that restartCount is present Jun 1 13:35:22.732: INFO: Initial restart count of pod busybox-de1efe6e-304b-4e5e-96de-5477a56bf5c2 is 0 Jun 1 13:36:19.232: INFO: Restart count of pod container-probe-1937/busybox-de1efe6e-304b-4e5e-96de-5477a56bf5c2 is now 1 (56.500347979s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:36:19.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1937" for this suite. 
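The liveness-probe test above follows the standard pattern for exercising an exec probe: the container creates `/tmp/health`, deletes it after a delay, and the kubelet's `cat /tmp/health` probe then fails, so the container is restarted (the log shows restartCount going from 0 to 1 after ~56s). A sketch of that container spec — the image, timings, and sleep intervals are assumptions:

```python
# Hypothetical container spec: the probe command succeeds while /tmp/health
# exists and fails after the container removes it, triggering a restart.
container = {
    "name": "busybox",
    "image": "busybox",
    "args": ["/bin/sh", "-c",
             "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"],
    "livenessProbe": {
        "exec": {"command": ["cat", "/tmp/health"]},
        "initialDelaySeconds": 15,
        "failureThreshold": 1,
    },
}
```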
Jun 1 13:36:25.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:36:25.473: INFO: namespace container-probe-1937 deletion completed in 6.123952225s • [SLOW TEST:68.984 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:36:25.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:36:26.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305" in namespace "projected-8932" to be "success or failure" Jun 1 13:36:26.420: INFO: Pod "downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.449216ms Jun 1 13:36:28.423: INFO: Pod "downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042123072s Jun 1 13:36:30.480: INFO: Pod "downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098679839s Jun 1 13:36:32.484: INFO: Pod "downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102608172s STEP: Saw pod success Jun 1 13:36:32.484: INFO: Pod "downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305" satisfied condition "success or failure" Jun 1 13:36:32.487: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305 container client-container: STEP: delete the pod Jun 1 13:36:32.595: INFO: Waiting for pod downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305 to disappear Jun 1 13:36:32.642: INFO: Pod downwardapi-volume-13bc5c32-fb97-4dea-8f60-a6270659c305 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:36:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8932" for this suite. 
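The test above reads `limits.memory` through the downward API from a container that sets no memory limit; as the test name states, the value then defaults to the node's allocatable memory rather than being empty. A sketch of the volume item involved (container name is hypothetical):

```python
# Hypothetical downward-API item: with no memory limit set on the container,
# the kubelet substitutes node allocatable memory for limits.memory.
volume_item = {
    "path": "memory_limit",
    "resourceFieldRef": {
        "containerName": "client-container",
        "resource": "limits.memory",
    },
}
```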
Jun 1 13:36:38.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:36:38.890: INFO: namespace projected-8932 deletion completed in 6.244094894s • [SLOW TEST:13.417 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:36:38.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:36:39.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f" in namespace "projected-9989" to be "success or failure" Jun 1 13:36:39.192: INFO: Pod "downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 84.694297ms Jun 1 13:36:41.668: INFO: Pod "downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56085786s Jun 1 13:36:43.673: INFO: Pod "downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565467214s Jun 1 13:36:45.678: INFO: Pod "downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.570102999s STEP: Saw pod success Jun 1 13:36:45.678: INFO: Pod "downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f" satisfied condition "success or failure" Jun 1 13:36:45.750: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f container client-container: STEP: delete the pod Jun 1 13:36:45.805: INFO: Waiting for pod downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f to disappear Jun 1 13:36:45.814: INFO: Pod downwardapi-volume-0828071f-5a15-4e01-8212-76bf303c8a4f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:36:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9989" for this suite. 
Jun 1 13:36:51.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:36:51.898: INFO: namespace projected-9989 deletion completed in 6.081208868s • [SLOW TEST:13.009 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:36:51.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 1 13:36:52.154: INFO: Waiting up to 5m0s for pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9" in namespace "emptydir-6899" to be "success or failure" Jun 1 13:36:52.216: INFO: Pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9": Phase="Pending", Reason="", readiness=false. Elapsed: 62.220497ms Jun 1 13:36:54.220: INFO: Pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065901107s Jun 1 13:36:56.225: INFO: Pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.07082436s Jun 1 13:36:58.229: INFO: Pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9": Phase="Running", Reason="", readiness=true. Elapsed: 6.075302848s Jun 1 13:37:00.234: INFO: Pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07972556s STEP: Saw pod success Jun 1 13:37:00.234: INFO: Pod "pod-93cbbae3-a65f-4b50-931b-552c373653a9" satisfied condition "success or failure" Jun 1 13:37:00.236: INFO: Trying to get logs from node iruya-worker pod pod-93cbbae3-a65f-4b50-931b-552c373653a9 container test-container: STEP: delete the pod Jun 1 13:37:00.291: INFO: Waiting for pod pod-93cbbae3-a65f-4b50-931b-552c373653a9 to disappear Jun 1 13:37:00.313: INFO: Pod pod-93cbbae3-a65f-4b50-931b-552c373653a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:37:00.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6899" for this suite. 
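A pod matching the (root,0666,default) emptyDir case above might look like this sketch (image and commands are illustrative; the real suite uses its mounttest image, which creates the file with the requested mode and prints its permissions back):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # stand-in for the e2e mounttest image
    # Runs as root, writes a file with mode 0666, then prints the mode
    # back so the test can verify it.
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # no medium set => "default" (node filesystem, not tmpfs)
```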
Jun 1 13:37:08.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:37:08.452: INFO: namespace emptydir-6899 deletion completed in 8.13555287s • [SLOW TEST:16.553 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:37:08.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 1 13:37:08.683: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 1 13:37:13.732: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:37:14.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3377" for this suite. 
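The ReplicationController behind the pod-release pod above can be sketched as follows (image is a stand-in). Overwriting the pod's name label so it no longer matches spec.selector causes the RC to release the pod (its controller ownerReference is dropped) and spin up a replacement, which is the behavior this test asserts:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release            # changing this label on a live pod orphans it
    spec:
      containers:
      - name: pod-release
        image: nginx                 # illustrative image
```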
Jun 1 13:37:22.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:37:22.927: INFO: namespace replication-controller-3377 deletion completed in 8.132162691s • [SLOW TEST:14.475 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:37:22.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:37:23.349: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4cd05822-87fd-4398-931e-b52c41aa9cdf", Controller:(*bool)(0xc002a64e92), BlockOwnerDeletion:(*bool)(0xc002a64e93)}} Jun 1 13:37:23.456: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e21d2d4d-d8e4-4e2f-972c-71d079afd278", Controller:(*bool)(0xc002cbf552), BlockOwnerDeletion:(*bool)(0xc002cbf553)}} Jun 1 13:37:23.462: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", 
UID:"ba86760e-8d52-4e46-82a6-4e311af23423", Controller:(*bool)(0xc002cbf6da), BlockOwnerDeletion:(*bool)(0xc002cbf6db)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:37:28.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5346" for this suite. Jun 1 13:37:34.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:37:34.648: INFO: namespace gc-5346 deletion completed in 6.126575115s • [SLOW TEST:11.720 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:37:34.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jun 1 13:37:34.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8004' Jun 1 13:37:38.168: INFO: stderr: "" 
Jun 1 13:37:38.168: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jun 1 13:37:39.173: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:39.173: INFO: Found 0 / 1 Jun 1 13:37:40.426: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:40.427: INFO: Found 0 / 1 Jun 1 13:37:41.176: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:41.176: INFO: Found 0 / 1 Jun 1 13:37:42.217: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:42.217: INFO: Found 0 / 1 Jun 1 13:37:43.271: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:43.271: INFO: Found 0 / 1 Jun 1 13:37:44.181: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:44.181: INFO: Found 0 / 1 Jun 1 13:37:45.173: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:45.173: INFO: Found 1 / 1 Jun 1 13:37:45.173: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 1 13:37:45.176: INFO: Selector matched 1 pods for map[app:redis] Jun 1 13:37:45.176: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 1 13:37:45.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m799 redis-master --namespace=kubectl-8004' Jun 1 13:37:45.280: INFO: stderr: "" Jun 1 13:37:45.280: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jun 13:37:44.216 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jun 13:37:44.216 # Server started, Redis version 3.2.12\n1:M 01 Jun 13:37:44.216 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jun 13:37:44.216 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 1 13:37:45.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m799 redis-master --namespace=kubectl-8004 --tail=1' Jun 1 13:37:45.418: INFO: stderr: "" Jun 1 13:37:45.418: INFO: stdout: "1:M 01 Jun 13:37:44.216 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 1 13:37:45.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m799 redis-master --namespace=kubectl-8004 --limit-bytes=1' Jun 1 13:37:45.514: INFO: stderr: "" Jun 1 13:37:45.514: INFO: stdout: " " STEP: exposing timestamps Jun 1 13:37:45.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m799 redis-master --namespace=kubectl-8004 --tail=1 --timestamps' Jun 1 13:37:45.612: INFO: stderr: "" Jun 1 13:37:45.612: INFO: 
stdout: "2020-06-01T13:37:44.216510252Z 1:M 01 Jun 13:37:44.216 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 1 13:37:48.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m799 redis-master --namespace=kubectl-8004 --since=1s' Jun 1 13:37:48.270: INFO: stderr: "" Jun 1 13:37:48.270: INFO: stdout: "" Jun 1 13:37:48.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m799 redis-master --namespace=kubectl-8004 --since=24h' Jun 1 13:37:48.368: INFO: stderr: "" Jun 1 13:37:48.368: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jun 13:37:44.216 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jun 13:37:44.216 # Server started, Redis version 3.2.12\n1:M 01 Jun 13:37:44.216 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 Jun 13:37:44.216 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jun 1 13:37:48.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8004' Jun 1 13:37:48.499: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 13:37:48.499: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 1 13:37:48.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8004' Jun 1 13:37:48.605: INFO: stderr: "No resources found.\n" Jun 1 13:37:48.605: INFO: stdout: "" Jun 1 13:37:48.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8004 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 1 13:37:48.762: INFO: stderr: "" Jun 1 13:37:48.762: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:37:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8004" for this suite. 
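The replicationcontroller/redis-master created from stdin above corresponds to a manifest roughly of this shape (a hedged sketch: the suite pipes in its own guestbook manifest, and the image tag here is a stand-in inferred from the "Redis version 3.2.12" banner). The test then filters the pod's logs with --tail, --limit-bytes, --timestamps, and --since, as seen in the kubectl invocations above:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis                       # the label the test selects on (map[app:redis])
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2             # stand-in image
        ports:
        - containerPort: 6379
```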
Jun 1 13:38:10.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:38:10.878: INFO: namespace kubectl-8004 deletion completed in 22.111765471s • [SLOW TEST:36.230 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:38:10.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-829d072f-a617-4ab6-9b05-1fb2269bd19e in namespace container-probe-8724 Jun 1 13:38:17.152: INFO: Started pod liveness-829d072f-a617-4ab6-9b05-1fb2269bd19e in namespace container-probe-8724 STEP: checking the pod's current state and verifying that restartCount is present Jun 1 13:38:17.155: INFO: Initial restart count of pod 
liveness-829d072f-a617-4ab6-9b05-1fb2269bd19e is 0 Jun 1 13:38:43.362: INFO: Restart count of pod container-probe-8724/liveness-829d072f-a617-4ab6-9b05-1fb2269bd19e is now 1 (26.207303397s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:38:43.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8724" for this suite. Jun 1 13:38:49.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:38:49.686: INFO: namespace container-probe-8724 deletion completed in 6.167362638s • [SLOW TEST:38.807 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:38:49.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:38:49.822: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.868461ms)
Jun 1 13:38:49.825: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.243002ms)
Jun 1 13:38:49.827: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.379011ms)
Jun 1 13:38:49.830: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.722727ms)
Jun 1 13:38:49.833: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.399544ms)
Jun 1 13:38:49.836: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.969607ms)
Jun 1 13:38:49.838: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.652309ms)
Jun 1 13:38:49.841: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.80185ms)
Jun 1 13:38:49.843: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.152705ms)
Jun 1 13:38:49.845: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.238996ms)
Jun 1 13:38:49.848: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.185585ms)
Jun 1 13:38:49.851: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.973854ms)
Jun 1 13:38:49.853: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.364742ms)
Jun 1 13:38:49.873: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 19.969589ms)
Jun 1 13:38:49.877: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.441248ms)
Jun 1 13:38:49.880: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.070785ms)
Jun 1 13:38:49.883: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.197492ms)
Jun 1 13:38:49.886: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.400611ms)
Jun 1 13:38:49.889: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.751744ms)
Jun 1 13:38:49.891: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 2.337029ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:38:49.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6248" for this suite. Jun 1 13:38:56.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:38:56.123: INFO: namespace proxy-6248 deletion completed in 6.227873597s • [SLOW TEST:6.436 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:38:56.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API 
volume plugin Jun 1 13:38:56.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3" in namespace "downward-api-8452" to be "success or failure" Jun 1 13:38:56.316: INFO: Pod "downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.436027ms Jun 1 13:38:58.442: INFO: Pod "downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148194734s Jun 1 13:39:00.446: INFO: Pod "downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152018992s Jun 1 13:39:02.451: INFO: Pod "downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.156454048s STEP: Saw pod success Jun 1 13:39:02.451: INFO: Pod "downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3" satisfied condition "success or failure" Jun 1 13:39:02.454: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3 container client-container: STEP: delete the pod Jun 1 13:39:02.663: INFO: Waiting for pod downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3 to disappear Jun 1 13:39:02.714: INFO: Pod downwardapi-volume-8b878b79-8ff0-4ed9-a075-8b47603f13a3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:39:02.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8452" for this suite. 
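This Downward API volume test uses the same resourceFieldRef mechanism as the Projected downwardAPI tests earlier in the log, just via a plain downwardAPI volume source rather than a projected one. A hedged sketch (names, image, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                     # plain downwardAPI volume, not projected
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory    # no limit set => node allocatable memory
```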
Jun 1 13:39:08.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:39:08.915: INFO: namespace downward-api-8452 deletion completed in 6.197559399s • [SLOW TEST:12.792 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:39:08.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4581 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4581 to expose endpoints map[] Jun 1 13:39:09.147: INFO: Get endpoints failed (79.915283ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 1 13:39:10.151: INFO: successfully validated that service multi-endpoint-test in namespace services-4581 exposes endpoints map[] (1.083935774s elapsed) STEP: Creating pod pod1 
in namespace services-4581 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4581 to expose endpoints map[pod1:[100]] Jun 1 13:39:14.641: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.48332885s elapsed, will retry) Jun 1 13:39:16.757: INFO: successfully validated that service multi-endpoint-test in namespace services-4581 exposes endpoints map[pod1:[100]] (6.599897454s elapsed) STEP: Creating pod pod2 in namespace services-4581 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4581 to expose endpoints map[pod1:[100] pod2:[101]] Jun 1 13:39:20.950: INFO: Unexpected endpoints: found map[73dfea87-2444-49a6-9ced-9a00a400d6d6:[100]], expected map[pod1:[100] pod2:[101]] (4.147656666s elapsed, will retry) Jun 1 13:39:21.959: INFO: successfully validated that service multi-endpoint-test in namespace services-4581 exposes endpoints map[pod1:[100] pod2:[101]] (5.156776587s elapsed) STEP: Deleting pod pod1 in namespace services-4581 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4581 to expose endpoints map[pod2:[101]] Jun 1 13:39:23.076: INFO: successfully validated that service multi-endpoint-test in namespace services-4581 exposes endpoints map[pod2:[101]] (1.112366027s elapsed) STEP: Deleting pod pod2 in namespace services-4581 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4581 to expose endpoints map[] Jun 1 13:39:24.292: INFO: successfully validated that service multi-endpoint-test in namespace services-4581 exposes endpoints map[] (1.212141656s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:39:24.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4581" for this suite. 
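The endpoints maps above (map[pod1:[100]], then map[pod1:[100] pod2:[101]]) come from a two-port Service whose target ports 100 and 101 are each served by a different pod. A sketch of the shape involved (the selector, port names, and service ports are illustrative; only the target ports 100 and 101 are taken from the run):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test         # hypothetical selector; pod1/pod2 carry matching labels
  ports:
  - name: portname1                  # hypothetical port names
    port: 80
    targetPort: 100                  # served by pod1 in the run above
  - name: portname2
    port: 81
    targetPort: 101                  # served by pod2 in the run above
```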
Jun 1 13:39:46.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:39:46.815: INFO: namespace services-4581 deletion completed in 22.204820443s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:37.900 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:39:46.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9658.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9658.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9658.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9658.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9658.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9658.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 1 13:39:55.053: INFO: DNS probes using dns-9658/dns-test-6b2fdfd4-3636-4336-8935-786f886ea50f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:39:55.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9658" for this suite. 
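The probe scripts above derive each pod's DNS A-record name by replacing the dots in the pod IP with dashes (the `awk -F.` pipeline) and appending the `<namespace>.pod.cluster.local` suffix. A minimal Python sketch of that transformation (the function name is illustrative, not part of the e2e framework):

```python
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Mimic the awk -F. pipeline from the probe script: dots in the
    pod IP become dashes, then the pod-DNS suffix is appended."""
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, domain)

print(pod_a_record("10.244.1.134", "dns-9658"))
# → 10-244-1-134.dns-9658.pod.cluster.local
```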
Jun 1 13:40:01.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:40:01.446: INFO: namespace dns-9658 deletion completed in 6.279570314s
• [SLOW TEST:14.630 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:40:01.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9167b1d3-9e92-4b6b-a6ba-7e794e837313
STEP: Creating a pod to test consume configMaps
Jun 1 13:40:01.634: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21" in namespace "projected-6135" to be "success or failure"
Jun 1 13:40:01.684: INFO: Pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21": Phase="Pending", Reason="", readiness=false. Elapsed: 49.28866ms
Jun 1 13:40:03.688: INFO: Pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053327852s
Jun 1 13:40:06.058: INFO: Pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424244004s
Jun 1 13:40:08.063: INFO: Pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21": Phase="Running", Reason="", readiness=true. Elapsed: 6.428281844s
Jun 1 13:40:10.067: INFO: Pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.432675431s
STEP: Saw pod success
Jun 1 13:40:10.067: INFO: Pod "pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21" satisfied condition "success or failure"
Jun 1 13:40:10.070: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21 container projected-configmap-volume-test:
STEP: delete the pod
Jun 1 13:40:10.112: INFO: Waiting for pod pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21 to disappear
Jun 1 13:40:10.120: INFO: Pod pod-projected-configmaps-b3269dfa-c6db-4ada-9a6b-c1535bb42b21 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:40:10.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6135" for this suite.
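The "success or failure" wait above polls the pod phase roughly every two seconds until it reaches Succeeded or Failed, or the 5m0s timeout expires. A minimal sketch of that polling pattern (names and intervals are illustrative, not the framework's actual API):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns a truthy
    value or `timeout` elapses, echoing the e2e framework's wait loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.0fs" % timeout)

# toy usage: a "pod" whose phase reaches Succeeded on the fourth poll
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
state = {"phase": "Pending"}

def pod_done():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] in ("Succeeded", "Failed")

assert wait_for(pod_done, timeout=10, interval=0.01)
```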
Jun 1 13:40:16.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:40:16.264: INFO: namespace projected-6135 deletion completed in 6.141068753s
• [SLOW TEST:14.817 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:40:16.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jun 1 13:40:16.409: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 1 13:40:16.490: INFO: Waiting for terminating namespaces to be deleted...
Jun 1 13:40:16.591: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Jun 1 13:40:16.598: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 1 13:40:16.598: INFO: Container kube-proxy ready: true, restart count 0
Jun 1 13:40:16.598: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 1 13:40:16.598: INFO: Container kindnet-cni ready: true, restart count 2
Jun 1 13:40:16.598: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Jun 1 13:40:16.606: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Jun 1 13:40:16.606: INFO: Container coredns ready: true, restart count 0
Jun 1 13:40:16.606: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Jun 1 13:40:16.606: INFO: Container coredns ready: true, restart count 0
Jun 1 13:40:16.606: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Jun 1 13:40:16.606: INFO: Container kube-proxy ready: true, restart count 0
Jun 1 13:40:16.606: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Jun 1 13:40:16.606: INFO: Container kindnet-cni ready: true, restart count 2
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-fc02074f-ae2a-4946-a001-57240e7d5d8f 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-fc02074f-ae2a-4946-a001-57240e7d5d8f off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-fc02074f-ae2a-4946-a001-57240e7d5d8f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:40:26.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-342" for this suite.
Jun 1 13:40:46.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:40:47.003: INFO: namespace sched-pred-342 deletion completed in 20.108804428s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:30.739 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:40:47.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6959
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 1 13:40:47.148: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 1 13:41:13.525: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.44 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6959 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 1 13:41:13.525: INFO: >>> kubeConfig: /root/.kube/config
I0601 13:41:13.647022 6 log.go:172] (0xc000c0e370) (0xc0017e0820) Create stream
I0601 13:41:13.647097 6 log.go:172] (0xc000c0e370) (0xc0017e0820) Stream added, broadcasting: 1
I0601 13:41:13.649610 6 log.go:172] (0xc000c0e370) Reply frame received for 1
I0601 13:41:13.649640 6 log.go:172] (0xc000c0e370) (0xc0030ee780) Create stream
I0601 13:41:13.649650 6 log.go:172] (0xc000c0e370) (0xc0030ee780) Stream added, broadcasting: 3
I0601 13:41:13.650517 6 log.go:172] (0xc000c0e370) Reply frame received for 3
I0601 13:41:13.650550 6 log.go:172] (0xc000c0e370) (0xc0017e08c0) Create stream
I0601 13:41:13.650563 6 log.go:172] (0xc000c0e370) (0xc0017e08c0) Stream added, broadcasting: 5
I0601 13:41:13.651366 6 log.go:172] (0xc000c0e370) Reply frame received for 5
I0601 13:41:14.772515 6 log.go:172] (0xc000c0e370) Data frame received for 5
I0601 13:41:14.772553 6 log.go:172] (0xc0017e08c0) (5) Data frame handling
I0601 13:41:14.772587 6 log.go:172] (0xc000c0e370) Data frame received for 3
I0601 13:41:14.772628 6 log.go:172] (0xc0030ee780) (3) Data frame handling
I0601 13:41:14.772649 6 log.go:172] (0xc0030ee780) (3) Data frame sent
I0601 13:41:14.772665 6 log.go:172] (0xc000c0e370) Data frame received for 3
I0601 13:41:14.772676 6 log.go:172] (0xc0030ee780) (3) Data frame handling
I0601 13:41:14.774764 6 log.go:172] (0xc000c0e370) Data frame received for 1
I0601 13:41:14.774785 6 log.go:172] (0xc0017e0820) (1) Data frame handling
I0601 13:41:14.774798 6 log.go:172] (0xc0017e0820) (1) Data frame sent
I0601 13:41:14.774812 6 log.go:172] (0xc000c0e370) (0xc0017e0820) Stream removed, broadcasting: 1
I0601 13:41:14.774905 6 log.go:172] (0xc000c0e370) (0xc0017e0820) Stream removed, broadcasting: 1
I0601 13:41:14.774941 6 log.go:172] (0xc000c0e370) (0xc0030ee780) Stream removed, broadcasting: 3
I0601 13:41:14.775003 6 log.go:172] (0xc000c0e370) Go away received
I0601 13:41:14.775053 6 log.go:172] (0xc000c0e370) (0xc0017e08c0) Stream removed, broadcasting: 5
Jun 1 13:41:14.775: INFO: Found all expected endpoints: [netserver-0]
Jun 1 13:41:14.778: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.134 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6959 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 1 13:41:14.778: INFO: >>> kubeConfig: /root/.kube/config
I0601 13:41:14.808561 6 log.go:172] (0xc000c0f080) (0xc0017e0e60) Create stream
I0601 13:41:14.808582 6 log.go:172] (0xc000c0f080) (0xc0017e0e60) Stream added, broadcasting: 1
I0601 13:41:14.821743 6 log.go:172] (0xc000c0f080) Reply frame received for 1
I0601 13:41:14.821803 6 log.go:172] (0xc000c0f080) (0xc0020ef7c0) Create stream
I0601 13:41:14.821818 6 log.go:172] (0xc000c0f080) (0xc0020ef7c0) Stream added, broadcasting: 3
I0601 13:41:14.827352 6 log.go:172] (0xc000c0f080) Reply frame received for 3
I0601 13:41:14.827383 6 log.go:172] (0xc000c0f080) (0xc0030a5ea0) Create stream
I0601 13:41:14.827390 6 log.go:172] (0xc000c0f080) (0xc0030a5ea0) Stream added, broadcasting: 5
I0601 13:41:14.828089 6 log.go:172] (0xc000c0f080) Reply frame received for 5
I0601 13:41:15.887835 6 log.go:172] (0xc000c0f080) Data frame received for 5
I0601 13:41:15.887897 6 log.go:172] (0xc0030a5ea0) (5) Data frame handling
I0601 13:41:15.887937 6 log.go:172] (0xc000c0f080) Data frame received for 3
I0601 13:41:15.887953 6 log.go:172] (0xc0020ef7c0) (3) Data frame handling
I0601 13:41:15.887976 6 log.go:172] (0xc0020ef7c0) (3) Data frame sent
I0601 13:41:15.888001 6 log.go:172] (0xc000c0f080) Data frame received for 3
I0601 13:41:15.888011 6 log.go:172] (0xc0020ef7c0) (3) Data frame handling
I0601 13:41:15.889590 6 log.go:172] (0xc000c0f080) Data frame received for 1
I0601 13:41:15.889628 6 log.go:172] (0xc0017e0e60) (1) Data frame handling
I0601 13:41:15.889781 6 log.go:172] (0xc0017e0e60) (1) Data frame sent
I0601 13:41:15.889802 6 log.go:172] (0xc000c0f080) (0xc0017e0e60) Stream removed, broadcasting: 1
I0601 13:41:15.889826 6 log.go:172] (0xc000c0f080) Go away received
I0601 13:41:15.889922 6 log.go:172] (0xc000c0f080) (0xc0017e0e60) Stream removed, broadcasting: 1
I0601 13:41:15.889946 6 log.go:172] (0xc000c0f080) (0xc0020ef7c0) Stream removed, broadcasting: 3
I0601 13:41:15.889961 6 log.go:172] (0xc000c0f080) (0xc0030a5ea0) Stream removed, broadcasting: 5
Jun 1 13:41:15.889: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:41:15.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6959" for this suite.
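The probe executed above (`echo hostName | nc -w 1 -u <pod-ip> 8081`) sends one UDP datagram to the netserver pod and expects its hostname back within a one-second timeout. A self-contained Python sketch of that request/reply pattern, using a local UDP echo thread in place of the pod (the server name and reply are illustrative):

```python
import socket
import threading

def udp_name_server(sock):
    """Reply to one datagram with a fixed host name, like the e2e netserver."""
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(b"netserver-0", addr)

# local stand-in for the pod's UDP endpoint; port 0 picks a free port
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=udp_name_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)  # equivalent of nc's -w 1
cli.sendto(b"hostName", srv.getsockname())
reply, _ = cli.recvfrom(1024)
print(reply.decode())
# → netserver-0
```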
Jun 1 13:41:41.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:41:42.010: INFO: namespace pod-network-test-6959 deletion completed in 26.116176462s
• [SLOW TEST:55.006 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:41:42.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 13:41:42.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jun 1 13:41:42.338: INFO: stderr: ""
Jun 1 13:41:42.338: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:41:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3921" for this suite.
Jun 1 13:41:48.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:41:48.437: INFO: namespace kubectl-3921 deletion completed in 6.095096093s
• [SLOW TEST:6.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:41:48.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 1 13:41:53.801: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:41:54.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3514" for this suite.
Jun 1 13:42:00.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:42:00.604: INFO: namespace container-runtime-3514 deletion completed in 6.177583784s
• [SLOW TEST:12.166 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:42:00.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 13:42:00.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6" in namespace "projected-7883" to be "success or failure"
Jun 1 13:42:00.906: INFO: Pod "downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.252635ms
Jun 1 13:42:03.006: INFO: Pod "downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123238724s
Jun 1 13:42:05.011: INFO: Pod "downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128214797s
Jun 1 13:42:07.054: INFO: Pod "downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.170657938s
STEP: Saw pod success
Jun 1 13:42:07.054: INFO: Pod "downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6" satisfied condition "success or failure"
Jun 1 13:42:07.057: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6 container client-container:
STEP: delete the pod
Jun 1 13:42:07.476: INFO: Waiting for pod downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6 to disappear
Jun 1 13:42:07.520: INFO: Pod downwardapi-volume-10cd8fb3-baab-44b0-b94d-35fc5a268ba6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:42:07.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7883" for this suite.
Jun 1 13:42:13.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:42:13.691: INFO: namespace projected-7883 deletion completed in 6.166467834s
• [SLOW TEST:13.087 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:42:13.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9980.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9980.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9980.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9980.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9980.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.222.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.222.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.222.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.222.195_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9980.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9980.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9980.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9980.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9980.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9980.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.222.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.222.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.222.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.222.195_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 1 13:42:22.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.251: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.254: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.276: INFO: Unable to read jessie_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.279: INFO: Unable to read jessie_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.285: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:22.309: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: [wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_udp@dns-test-service.dns-9980.svc.cluster.local jessie_tcp@dns-test-service.dns-9980.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local]
Jun 1 13:42:27.314: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.321: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.324: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.345: INFO: Unable to read jessie_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.349: INFO: Unable to read jessie_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.352: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.355: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:27.368: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: [wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_udp@dns-test-service.dns-9980.svc.cluster.local jessie_tcp@dns-test-service.dns-9980.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local]
Jun 1 13:42:32.315: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.319: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.322: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.325: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.346: INFO: Unable to read jessie_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.348: INFO: Unable to read jessie_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.351: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08)
Jun 1 13:42:32.354: INFO:
Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:32.372: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: [wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_udp@dns-test-service.dns-9980.svc.cluster.local jessie_tcp@dns-test-service.dns-9980.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local] Jun 1 13:42:37.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.390: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.393: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.410: INFO: Unable to read 
jessie_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.416: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:37.435: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: [wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_udp@dns-test-service.dns-9980.svc.cluster.local jessie_tcp@dns-test-service.dns-9980.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local] Jun 1 13:42:42.320: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.323: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.327: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.349: INFO: Unable to read jessie_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.351: INFO: Unable to read jessie_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.354: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.356: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:42.374: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: 
[wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_udp@dns-test-service.dns-9980.svc.cluster.local jessie_tcp@dns-test-service.dns-9980.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local] Jun 1 13:42:47.324: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.327: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.335: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.356: INFO: Unable to read jessie_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-9980.svc.cluster.local from pod 
dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.438: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.440: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:47.469: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: [wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_udp@dns-test-service.dns-9980.svc.cluster.local jessie_tcp@dns-test-service.dns-9980.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9980.svc.cluster.local] Jun 1 13:42:52.315: INFO: Unable to read wheezy_udp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:52.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local from pod dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08: the server could not find the requested resource (get pods dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08) Jun 1 13:42:52.372: INFO: Lookups using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 failed for: 
[wheezy_udp@dns-test-service.dns-9980.svc.cluster.local wheezy_tcp@dns-test-service.dns-9980.svc.cluster.local] Jun 1 13:42:57.414: INFO: DNS probes using dns-9980/dns-test-aebb9fbd-0179-4f20-9694-33e4c024ad08 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:42:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9980" for this suite. Jun 1 13:43:05.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:43:05.705: INFO: namespace dns-9980 deletion completed in 6.290340266s • [SLOW TEST:52.014 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:43:05.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 1 13:43:12.010: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:43:12.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-945" for this suite. Jun 1 13:43:18.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:43:18.307: INFO: namespace container-runtime-945 deletion completed in 6.164785315s • [SLOW TEST:12.601 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:43:18.307: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jun 1 13:43:25.020: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5591 pod-service-account-7b7bc74c-6753-473d-a8e1-c55d769c8ef1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 1 13:43:25.228: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5591 pod-service-account-7b7bc74c-6753-473d-a8e1-c55d769c8ef1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 1 13:43:25.445: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5591 pod-service-account-7b7bc74c-6753-473d-a8e1-c55d769c8ef1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:43:25.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5591" for this suite. 
Jun 1 13:43:31.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:43:31.853: INFO: namespace svcaccounts-5591 deletion completed in 6.139396523s • [SLOW TEST:13.546 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:43:31.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-d026fd53-bc5c-4e89-8957-8e08dc141838 STEP: Creating a pod to test consume secrets Jun 1 13:43:32.103: INFO: Waiting up to 5m0s for pod "pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808" in namespace "secrets-6809" to be "success or failure" Jun 1 13:43:32.126: INFO: Pod "pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808": Phase="Pending", Reason="", readiness=false. Elapsed: 23.523209ms Jun 1 13:43:34.163: INFO: Pod "pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.060491605s Jun 1 13:43:36.175: INFO: Pod "pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072059201s Jun 1 13:43:38.204: INFO: Pod "pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10155053s STEP: Saw pod success Jun 1 13:43:38.204: INFO: Pod "pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808" satisfied condition "success or failure" Jun 1 13:43:38.208: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808 container secret-volume-test: STEP: delete the pod Jun 1 13:43:38.303: INFO: Waiting for pod pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808 to disappear Jun 1 13:43:38.595: INFO: Pod pod-secrets-e5a2bc8a-103c-4c06-94c3-ac26790ff808 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:43:38.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6809" for this suite. 
Jun 1 13:43:44.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:43:44.892: INFO: namespace secrets-6809 deletion completed in 6.293271859s • [SLOW TEST:13.039 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:43:44.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2743 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 1 13:43:45.995: INFO: Found 0 stateful pods, waiting for 3 Jun 1 13:43:56.000: INFO: Found 2 stateful pods, waiting for 3 Jun 1 13:44:06.000: 
INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:44:06.000: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:44:06.000: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 1 13:44:06.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2743 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 13:44:06.324: INFO: stderr: "I0601 13:44:06.128049 1766 log.go:172] (0xc000aca160) (0xc00068e3c0) Create stream\nI0601 13:44:06.128095 1766 log.go:172] (0xc000aca160) (0xc00068e3c0) Stream added, broadcasting: 1\nI0601 13:44:06.130542 1766 log.go:172] (0xc000aca160) Reply frame received for 1\nI0601 13:44:06.130603 1766 log.go:172] (0xc000aca160) (0xc00068e500) Create stream\nI0601 13:44:06.130616 1766 log.go:172] (0xc000aca160) (0xc00068e500) Stream added, broadcasting: 3\nI0601 13:44:06.131383 1766 log.go:172] (0xc000aca160) Reply frame received for 3\nI0601 13:44:06.131413 1766 log.go:172] (0xc000aca160) (0xc000300000) Create stream\nI0601 13:44:06.131422 1766 log.go:172] (0xc000aca160) (0xc000300000) Stream added, broadcasting: 5\nI0601 13:44:06.132100 1766 log.go:172] (0xc000aca160) Reply frame received for 5\nI0601 13:44:06.277693 1766 log.go:172] (0xc000aca160) Data frame received for 5\nI0601 13:44:06.277722 1766 log.go:172] (0xc000300000) (5) Data frame handling\nI0601 13:44:06.277740 1766 log.go:172] (0xc000300000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 13:44:06.316895 1766 log.go:172] (0xc000aca160) Data frame received for 3\nI0601 13:44:06.316923 1766 log.go:172] (0xc00068e500) (3) Data frame handling\nI0601 13:44:06.317055 1766 log.go:172] (0xc00068e500) (3) Data frame sent\nI0601 13:44:06.317328 1766 log.go:172] (0xc000aca160) Data frame received for 3\nI0601 13:44:06.317348 1766 log.go:172] (0xc00068e500) 
(3) Data frame handling\nI0601 13:44:06.317949 1766 log.go:172] (0xc000aca160) Data frame received for 5\nI0601 13:44:06.317981 1766 log.go:172] (0xc000300000) (5) Data frame handling\nI0601 13:44:06.319686 1766 log.go:172] (0xc000aca160) Data frame received for 1\nI0601 13:44:06.319717 1766 log.go:172] (0xc00068e3c0) (1) Data frame handling\nI0601 13:44:06.319744 1766 log.go:172] (0xc00068e3c0) (1) Data frame sent\nI0601 13:44:06.319772 1766 log.go:172] (0xc000aca160) (0xc00068e3c0) Stream removed, broadcasting: 1\nI0601 13:44:06.319802 1766 log.go:172] (0xc000aca160) Go away received\nI0601 13:44:06.320142 1766 log.go:172] (0xc000aca160) (0xc00068e3c0) Stream removed, broadcasting: 1\nI0601 13:44:06.320160 1766 log.go:172] (0xc000aca160) (0xc00068e500) Stream removed, broadcasting: 3\nI0601 13:44:06.320170 1766 log.go:172] (0xc000aca160) (0xc000300000) Stream removed, broadcasting: 5\n" Jun 1 13:44:06.324: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 13:44:06.324: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 1 13:44:16.358: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 1 13:44:26.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2743 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:44:26.983: INFO: stderr: "I0601 13:44:26.892005 1786 log.go:172] (0xc000116dc0) (0xc000814640) Create stream\nI0601 13:44:26.892060 1786 log.go:172] (0xc000116dc0) (0xc000814640) Stream added, broadcasting: 1\nI0601 13:44:26.893956 1786 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0601 13:44:26.893988 1786 log.go:172] (0xc000116dc0) (0xc0008e6000) Create stream\nI0601 
13:44:26.893994 1786 log.go:172] (0xc000116dc0) (0xc0008e6000) Stream added, broadcasting: 3\nI0601 13:44:26.894674 1786 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0601 13:44:26.894703 1786 log.go:172] (0xc000116dc0) (0xc00090c000) Create stream\nI0601 13:44:26.894714 1786 log.go:172] (0xc000116dc0) (0xc00090c000) Stream added, broadcasting: 5\nI0601 13:44:26.895349 1786 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0601 13:44:26.977528 1786 log.go:172] (0xc000116dc0) Data frame received for 5\nI0601 13:44:26.977552 1786 log.go:172] (0xc00090c000) (5) Data frame handling\nI0601 13:44:26.977561 1786 log.go:172] (0xc00090c000) (5) Data frame sent\nI0601 13:44:26.977566 1786 log.go:172] (0xc000116dc0) Data frame received for 5\nI0601 13:44:26.977572 1786 log.go:172] (0xc00090c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 13:44:26.977589 1786 log.go:172] (0xc000116dc0) Data frame received for 3\nI0601 13:44:26.977594 1786 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0601 13:44:26.977600 1786 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0601 13:44:26.977606 1786 log.go:172] (0xc000116dc0) Data frame received for 3\nI0601 13:44:26.977611 1786 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0601 13:44:26.978573 1786 log.go:172] (0xc000116dc0) Data frame received for 1\nI0601 13:44:26.978644 1786 log.go:172] (0xc000814640) (1) Data frame handling\nI0601 13:44:26.978675 1786 log.go:172] (0xc000814640) (1) Data frame sent\nI0601 13:44:26.978694 1786 log.go:172] (0xc000116dc0) (0xc000814640) Stream removed, broadcasting: 1\nI0601 13:44:26.978711 1786 log.go:172] (0xc000116dc0) Go away received\nI0601 13:44:26.979114 1786 log.go:172] (0xc000116dc0) (0xc000814640) Stream removed, broadcasting: 1\nI0601 13:44:26.979134 1786 log.go:172] (0xc000116dc0) (0xc0008e6000) Stream removed, broadcasting: 3\nI0601 13:44:26.979148 1786 log.go:172] (0xc000116dc0) (0xc00090c000) Stream removed, broadcasting: 5\n" 
Jun 1 13:44:26.984: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 13:44:26.984: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 13:44:37.018: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update Jun 1 13:44:37.018: INFO: Waiting for Pod statefulset-2743/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 1 13:44:37.018: INFO: Waiting for Pod statefulset-2743/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 1 13:44:47.104: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update Jun 1 13:44:47.104: INFO: Waiting for Pod statefulset-2743/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 1 13:44:57.186: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update STEP: Rolling back to a previous revision Jun 1 13:45:07.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2743 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 13:45:07.350: INFO: stderr: "I0601 13:45:07.150311 1807 log.go:172] (0xc0006beb00) (0xc000344820) Create stream\nI0601 13:45:07.150366 1807 log.go:172] (0xc0006beb00) (0xc000344820) Stream added, broadcasting: 1\nI0601 13:45:07.154621 1807 log.go:172] (0xc0006beb00) Reply frame received for 1\nI0601 13:45:07.154667 1807 log.go:172] (0xc0006beb00) (0xc000344000) Create stream\nI0601 13:45:07.154691 1807 log.go:172] (0xc0006beb00) (0xc000344000) Stream added, broadcasting: 3\nI0601 13:45:07.155435 1807 log.go:172] (0xc0006beb00) Reply frame received for 3\nI0601 13:45:07.155559 1807 log.go:172] (0xc0006beb00) (0xc0005da1e0) Create stream\nI0601 13:45:07.155568 1807 log.go:172] (0xc0006beb00) (0xc0005da1e0) Stream added, broadcasting: 5\nI0601 13:45:07.156411 1807 log.go:172] (0xc0006beb00) Reply frame received for 5\nI0601 
13:45:07.296757 1807 log.go:172] (0xc0006beb00) Data frame received for 5\nI0601 13:45:07.296787 1807 log.go:172] (0xc0005da1e0) (5) Data frame handling\nI0601 13:45:07.296808 1807 log.go:172] (0xc0005da1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 13:45:07.343153 1807 log.go:172] (0xc0006beb00) Data frame received for 3\nI0601 13:45:07.343190 1807 log.go:172] (0xc000344000) (3) Data frame handling\nI0601 13:45:07.343214 1807 log.go:172] (0xc000344000) (3) Data frame sent\nI0601 13:45:07.343231 1807 log.go:172] (0xc0006beb00) Data frame received for 3\nI0601 13:45:07.343251 1807 log.go:172] (0xc000344000) (3) Data frame handling\nI0601 13:45:07.343348 1807 log.go:172] (0xc0006beb00) Data frame received for 5\nI0601 13:45:07.343367 1807 log.go:172] (0xc0005da1e0) (5) Data frame handling\nI0601 13:45:07.345020 1807 log.go:172] (0xc0006beb00) Data frame received for 1\nI0601 13:45:07.345046 1807 log.go:172] (0xc000344820) (1) Data frame handling\nI0601 13:45:07.345054 1807 log.go:172] (0xc000344820) (1) Data frame sent\nI0601 13:45:07.345065 1807 log.go:172] (0xc0006beb00) (0xc000344820) Stream removed, broadcasting: 1\nI0601 13:45:07.345473 1807 log.go:172] (0xc0006beb00) Go away received\nI0601 13:45:07.345532 1807 log.go:172] (0xc0006beb00) (0xc000344820) Stream removed, broadcasting: 1\nI0601 13:45:07.345552 1807 log.go:172] (0xc0006beb00) (0xc000344000) Stream removed, broadcasting: 3\nI0601 13:45:07.345564 1807 log.go:172] (0xc0006beb00) (0xc0005da1e0) Stream removed, broadcasting: 5\n" Jun 1 13:45:07.350: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 13:45:07.350: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 13:45:07.461: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 1 13:45:17.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2743 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 13:45:17.916: INFO: stderr: "I0601 13:45:17.825399 1827 log.go:172] (0xc000426420) (0xc0007a2640) Create stream\nI0601 13:45:17.825472 1827 log.go:172] (0xc000426420) (0xc0007a2640) Stream added, broadcasting: 1\nI0601 13:45:17.827604 1827 log.go:172] (0xc000426420) Reply frame received for 1\nI0601 13:45:17.827641 1827 log.go:172] (0xc000426420) (0xc000424000) Create stream\nI0601 13:45:17.827656 1827 log.go:172] (0xc000426420) (0xc000424000) Stream added, broadcasting: 3\nI0601 13:45:17.828627 1827 log.go:172] (0xc000426420) Reply frame received for 3\nI0601 13:45:17.828666 1827 log.go:172] (0xc000426420) (0xc0007a26e0) Create stream\nI0601 13:45:17.828675 1827 log.go:172] (0xc000426420) (0xc0007a26e0) Stream added, broadcasting: 5\nI0601 13:45:17.829880 1827 log.go:172] (0xc000426420) Reply frame received for 5\nI0601 13:45:17.909716 1827 log.go:172] (0xc000426420) Data frame received for 5\nI0601 13:45:17.909758 1827 log.go:172] (0xc0007a26e0) (5) Data frame handling\nI0601 13:45:17.909774 1827 log.go:172] (0xc0007a26e0) (5) Data frame sent\nI0601 13:45:17.909783 1827 log.go:172] (0xc000426420) Data frame received for 5\nI0601 13:45:17.909792 1827 log.go:172] (0xc0007a26e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 13:45:17.909810 1827 log.go:172] (0xc000426420) Data frame received for 3\nI0601 13:45:17.909895 1827 log.go:172] (0xc000424000) (3) Data frame handling\nI0601 13:45:17.909941 1827 log.go:172] (0xc000424000) (3) Data frame sent\nI0601 13:45:17.910010 1827 log.go:172] (0xc000426420) Data frame received for 3\nI0601 13:45:17.910030 1827 log.go:172] (0xc000424000) (3) Data frame handling\nI0601 13:45:17.911661 1827 log.go:172] (0xc000426420) Data frame received for 1\nI0601 13:45:17.911680 1827 log.go:172] (0xc0007a2640) (1) Data frame handling\nI0601 13:45:17.911695 1827 log.go:172] (0xc0007a2640) (1) Data 
frame sent\nI0601 13:45:17.911709 1827 log.go:172] (0xc000426420) (0xc0007a2640) Stream removed, broadcasting: 1\nI0601 13:45:17.911913 1827 log.go:172] (0xc000426420) Go away received\nI0601 13:45:17.912064 1827 log.go:172] (0xc000426420) (0xc0007a2640) Stream removed, broadcasting: 1\nI0601 13:45:17.912082 1827 log.go:172] (0xc000426420) (0xc000424000) Stream removed, broadcasting: 3\nI0601 13:45:17.912092 1827 log.go:172] (0xc000426420) (0xc0007a26e0) Stream removed, broadcasting: 5\n" Jun 1 13:45:17.916: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 13:45:17.916: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 13:45:27.935: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update Jun 1 13:45:27.935: INFO: Waiting for Pod statefulset-2743/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 1 13:45:27.935: INFO: Waiting for Pod statefulset-2743/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 1 13:45:27.935: INFO: Waiting for Pod statefulset-2743/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 1 13:45:37.942: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update Jun 1 13:45:37.942: INFO: Waiting for Pod statefulset-2743/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 1 13:45:37.942: INFO: Waiting for Pod statefulset-2743/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 1 13:45:47.945: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update Jun 1 13:45:47.945: INFO: Waiting for Pod statefulset-2743/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 1 13:45:57.942: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update Jun 1 13:45:57.942: INFO: Waiting for Pod statefulset-2743/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd 
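[Editor's note] The "Waiting for Pod ... to have revision X update revision Y" lines above are the e2e suite polling until every pod's controller-revision-hash label matches the StatefulSet's updateRevision. A minimal sketch of that completion check, in Python rather than the suite's actual Go helper (function and variable names here are hypothetical):

```python
def rollout_complete(pod_revisions, update_revision):
    # A StatefulSet rolling update (or rollback) is considered complete
    # once every pod carries the update revision in its
    # controller-revision-hash label.
    return all(rev == update_revision for rev in pod_revisions.values())

# Mid-rollback state reconstructed from the log: ss2-0 still runs the
# previous revision, so the suite keeps logging "Waiting for StatefulSet
# statefulset-2743/ss2 to complete update".
pods = {
    "ss2-0": "ss2-7c9b54fd4c",
    "ss2-1": "ss2-6c5cd755cd",
    "ss2-2": "ss2-6c5cd755cd",
}
print(rollout_complete(pods, "ss2-6c5cd755cd"))  # False

pods["ss2-0"] = "ss2-6c5cd755cd"  # ss2-0 recreated at the target revision
print(rollout_complete(pods, "ss2-6c5cd755cd"))  # True
```

Because pods are updated in reverse ordinal order, the lowest ordinal (ss2-0) is the last to match, which is why it appears in the wait messages longest.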
Jun 1 13:46:08.098: INFO: Waiting for StatefulSet statefulset-2743/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 1 13:46:17.944: INFO: Deleting all statefulset in ns statefulset-2743 Jun 1 13:46:17.947: INFO: Scaling statefulset ss2 to 0 Jun 1 13:46:48.050: INFO: Waiting for statefulset status.replicas updated to 0 Jun 1 13:46:48.052: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:46:48.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2743" for this suite. Jun 1 13:46:58.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:46:58.222: INFO: namespace statefulset-2743 deletion completed in 10.122475591s • [SLOW TEST:193.329 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:46:58.222: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0601 13:47:11.532571 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 1 13:47:11.532: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:47:11.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5282" for this 
suite. Jun 1 13:47:23.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:47:24.093: INFO: namespace gc-5282 deletion completed in 12.537004473s • [SLOW TEST:25.871 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:47:24.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-801 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-801 STEP: Creating statefulset with conflicting port in namespace statefulset-801 STEP: Waiting until pod test-pod will start running in namespace 
statefulset-801 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-801 Jun 1 13:47:30.425: INFO: Observed stateful pod in namespace: statefulset-801, name: ss-0, uid: a66b086f-0648-4996-84b8-869364892b6e, status phase: Pending. Waiting for statefulset controller to delete. Jun 1 13:47:32.186: INFO: Observed stateful pod in namespace: statefulset-801, name: ss-0, uid: a66b086f-0648-4996-84b8-869364892b6e, status phase: Failed. Waiting for statefulset controller to delete. Jun 1 13:47:32.318: INFO: Observed stateful pod in namespace: statefulset-801, name: ss-0, uid: a66b086f-0648-4996-84b8-869364892b6e, status phase: Failed. Waiting for statefulset controller to delete. Jun 1 13:47:32.490: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-801 STEP: Removing pod with conflicting port in namespace statefulset-801 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-801 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 1 13:47:42.691: INFO: Deleting all statefulset in ns statefulset-801 Jun 1 13:47:42.694: INFO: Scaling statefulset ss to 0 Jun 1 13:47:52.739: INFO: Waiting for statefulset status.replicas updated to 0 Jun 1 13:47:52.742: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:47:52.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-801" for this suite. 
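[Editor's note] The evicted-StatefulSet test above watches pod events for ss-0: the same UID is observed Pending, then Failed (the port conflict), then deleted by the StatefulSet controller, after which a fresh pod is created. A hypothetical simplification of that watch predicate (the real logic is Go code in the e2e framework):

```python
def observed_recreate(events):
    # Pass once a stateful pod was observed in phase Failed and a delete
    # event for that same UID followed -- i.e. the controller removed the
    # failed pod so it could be recreated.
    failed_uids = {uid for kind, uid, phase in events
                   if kind == "MODIFIED" and phase == "Failed"}
    return any(kind == "DELETED" and uid in failed_uids
               for kind, uid, phase in events)

# Event sequence reconstructed from the log entries for pod ss-0:
uid = "a66b086f-0648-4996-84b8-869364892b6e"
events = [
    ("ADDED", uid, "Pending"),
    ("MODIFIED", uid, "Failed"),
    ("MODIFIED", uid, "Failed"),
    ("DELETED", uid, "Failed"),
]
print(observed_recreate(events))  # True
```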
Jun 1 13:48:00.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:48:00.919: INFO: namespace statefulset-801 deletion completed in 8.111817311s • [SLOW TEST:36.826 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:48:00.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2912 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 1 13:48:01.115: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 1 13:48:31.399: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.60:8080/hostName | grep -v '^\s*$'] 
Namespace:pod-network-test-2912 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:48:31.399: INFO: >>> kubeConfig: /root/.kube/config I0601 13:48:31.434170 6 log.go:172] (0xc003152d10) (0xc002823b80) Create stream I0601 13:48:31.434207 6 log.go:172] (0xc003152d10) (0xc002823b80) Stream added, broadcasting: 1 I0601 13:48:31.436083 6 log.go:172] (0xc003152d10) Reply frame received for 1 I0601 13:48:31.436143 6 log.go:172] (0xc003152d10) (0xc001c96140) Create stream I0601 13:48:31.436175 6 log.go:172] (0xc003152d10) (0xc001c96140) Stream added, broadcasting: 3 I0601 13:48:31.437739 6 log.go:172] (0xc003152d10) Reply frame received for 3 I0601 13:48:31.437775 6 log.go:172] (0xc003152d10) (0xc001c96280) Create stream I0601 13:48:31.437787 6 log.go:172] (0xc003152d10) (0xc001c96280) Stream added, broadcasting: 5 I0601 13:48:31.438844 6 log.go:172] (0xc003152d10) Reply frame received for 5 I0601 13:48:31.612337 6 log.go:172] (0xc003152d10) Data frame received for 3 I0601 13:48:31.612360 6 log.go:172] (0xc001c96140) (3) Data frame handling I0601 13:48:31.612370 6 log.go:172] (0xc001c96140) (3) Data frame sent I0601 13:48:31.612377 6 log.go:172] (0xc003152d10) Data frame received for 3 I0601 13:48:31.612382 6 log.go:172] (0xc001c96140) (3) Data frame handling I0601 13:48:31.612958 6 log.go:172] (0xc003152d10) Data frame received for 5 I0601 13:48:31.612973 6 log.go:172] (0xc001c96280) (5) Data frame handling I0601 13:48:31.615242 6 log.go:172] (0xc003152d10) Data frame received for 1 I0601 13:48:31.615261 6 log.go:172] (0xc002823b80) (1) Data frame handling I0601 13:48:31.615472 6 log.go:172] (0xc002823b80) (1) Data frame sent I0601 13:48:31.615489 6 log.go:172] (0xc003152d10) (0xc002823b80) Stream removed, broadcasting: 1 I0601 13:48:31.615566 6 log.go:172] (0xc003152d10) Go away received I0601 13:48:31.615641 6 log.go:172] (0xc003152d10) (0xc002823b80) Stream removed, broadcasting: 1 
I0601 13:48:31.615692 6 log.go:172] (0xc003152d10) (0xc001c96140) Stream removed, broadcasting: 3 I0601 13:48:31.615704 6 log.go:172] (0xc003152d10) (0xc001c96280) Stream removed, broadcasting: 5 Jun 1 13:48:31.615: INFO: Found all expected endpoints: [netserver-0] Jun 1 13:48:31.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.148:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2912 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:48:31.619: INFO: >>> kubeConfig: /root/.kube/config I0601 13:48:31.652534 6 log.go:172] (0xc003153340) (0xc002823e00) Create stream I0601 13:48:31.652560 6 log.go:172] (0xc003153340) (0xc002823e00) Stream added, broadcasting: 1 I0601 13:48:31.654551 6 log.go:172] (0xc003153340) Reply frame received for 1 I0601 13:48:31.654636 6 log.go:172] (0xc003153340) (0xc001c96500) Create stream I0601 13:48:31.654734 6 log.go:172] (0xc003153340) (0xc001c96500) Stream added, broadcasting: 3 I0601 13:48:31.655615 6 log.go:172] (0xc003153340) Reply frame received for 3 I0601 13:48:31.655665 6 log.go:172] (0xc003153340) (0xc0000ff860) Create stream I0601 13:48:31.655678 6 log.go:172] (0xc003153340) (0xc0000ff860) Stream added, broadcasting: 5 I0601 13:48:31.656571 6 log.go:172] (0xc003153340) Reply frame received for 5 I0601 13:48:31.722586 6 log.go:172] (0xc003153340) Data frame received for 5 I0601 13:48:31.722638 6 log.go:172] (0xc0000ff860) (5) Data frame handling I0601 13:48:31.722673 6 log.go:172] (0xc003153340) Data frame received for 3 I0601 13:48:31.722691 6 log.go:172] (0xc001c96500) (3) Data frame handling I0601 13:48:31.722716 6 log.go:172] (0xc001c96500) (3) Data frame sent I0601 13:48:31.722769 6 log.go:172] (0xc003153340) Data frame received for 3 I0601 13:48:31.722786 6 log.go:172] (0xc001c96500) (3) Data frame handling I0601 13:48:31.724119 6 log.go:172] (0xc003153340) Data frame 
received for 1 I0601 13:48:31.724147 6 log.go:172] (0xc002823e00) (1) Data frame handling I0601 13:48:31.724167 6 log.go:172] (0xc002823e00) (1) Data frame sent I0601 13:48:31.724184 6 log.go:172] (0xc003153340) (0xc002823e00) Stream removed, broadcasting: 1 I0601 13:48:31.724211 6 log.go:172] (0xc003153340) Go away received I0601 13:48:31.724296 6 log.go:172] (0xc003153340) (0xc002823e00) Stream removed, broadcasting: 1 I0601 13:48:31.724316 6 log.go:172] (0xc003153340) (0xc001c96500) Stream removed, broadcasting: 3 I0601 13:48:31.724336 6 log.go:172] (0xc003153340) (0xc0000ff860) Stream removed, broadcasting: 5 Jun 1 13:48:31.724: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:48:31.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2912" for this suite. Jun 1 13:48:57.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:48:57.822: INFO: namespace pod-network-test-2912 deletion completed in 26.09331396s • [SLOW TEST:56.902 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:48:57.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 1 13:49:04.624: INFO: Successfully updated pod "annotationupdate6f59a1ed-0688-4d66-a7e1-6c9ee66832b1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:49:06.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4462" for this suite. 
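[Editor's note] The projected downward API test above updates the pod's annotations and waits for the kubelet to refresh the projected volume file. The downward API serializes annotations one per line as key="value" pairs, which is roughly the content the test pod watches for. A sketch of that rendering (illustrative only; the actual serialization, including value escaping, is done by the kubelet):

```python
def render_annotations(annotations):
    # Approximate the downward API file format for a
    # fieldRef: metadata.annotations projection: one key="value" line
    # per annotation, in sorted key order.
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))

print(render_annotations({"builder": "bar", "build": "two"}))
```

The test's success condition is simply that the file content changes to reflect the new annotation value within the poll timeout.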
Jun 1 13:49:30.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:49:30.837: INFO: namespace projected-4462 deletion completed in 24.132441122s • [SLOW TEST:33.015 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:49:30.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jun 1 13:49:31.056: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jun 1 13:49:31.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8168' Jun 1 13:49:34.471: INFO: stderr: "" Jun 1 13:49:34.471: INFO: stdout: "service/redis-slave created\n" Jun 1 13:49:34.471: INFO: 
apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jun 1 13:49:34.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8168' Jun 1 13:49:34.886: INFO: stderr: "" Jun 1 13:49:34.886: INFO: stdout: "service/redis-master created\n" Jun 1 13:49:34.886: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 1 13:49:34.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8168' Jun 1 13:49:35.286: INFO: stderr: "" Jun 1 13:49:35.286: INFO: stdout: "service/frontend created\n" Jun 1 13:49:35.286: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jun 1 13:49:35.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8168' Jun 1 13:49:35.670: INFO: stderr: "" Jun 1 13:49:35.670: INFO: stdout: "deployment.apps/frontend created\n" Jun 1 13:49:35.670: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: 
redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 1 13:49:35.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8168' Jun 1 13:49:36.022: INFO: stderr: "" Jun 1 13:49:36.022: INFO: stdout: "deployment.apps/redis-master created\n" Jun 1 13:49:36.022: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jun 1 13:49:36.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8168' Jun 1 13:49:36.352: INFO: stderr: "" Jun 1 13:49:36.352: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jun 1 13:49:36.352: INFO: Waiting for all frontend pods to be Running. Jun 1 13:49:51.402: INFO: Waiting for frontend to serve content. Jun 1 13:49:51.448: INFO: Trying to add a new entry to the guestbook. Jun 1 13:49:51.530: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Jun 1 13:49:51.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8168' Jun 1 13:49:51.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 13:49:51.798: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 1 13:49:51.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8168' Jun 1 13:49:52.007: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 13:49:52.007: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 1 13:49:52.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8168' Jun 1 13:49:52.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 13:49:52.218: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 1 13:49:52.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8168' Jun 1 13:49:52.381: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
Jun 1 13:49:52.381: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 1 13:49:52.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8168'
Jun 1 13:49:52.479: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 1 13:49:52.479: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 1 13:49:52.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8168'
Jun 1 13:49:52.719: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 1 13:49:52.719: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:49:52.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8168" for this suite.
Jun 1 13:50:33.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:50:33.488: INFO: namespace kubectl-8168 deletion completed in 40.76527259s
• [SLOW TEST:62.651 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:50:33.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-163c9486-8851-4a91-ab67-f951d5a7e00e
STEP: Creating a pod to test consume configMaps
Jun 1 13:50:33.678: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254" in namespace "projected-1460" to be "success or failure"
Jun 1 13:50:33.714: INFO: Pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254": Phase="Pending", Reason="", readiness=false. Elapsed: 36.543946ms
Jun 1 13:50:35.718: INFO: Pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040075062s
Jun 1 13:50:37.937: INFO: Pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259044788s
Jun 1 13:50:39.944: INFO: Pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254": Phase="Running", Reason="", readiness=true. Elapsed: 6.26644785s
Jun 1 13:50:41.948: INFO: Pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.270187364s
STEP: Saw pod success
Jun 1 13:50:41.948: INFO: Pod "pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254" satisfied condition "success or failure"
Jun 1 13:50:41.951: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254 container projected-configmap-volume-test:
STEP: delete the pod
Jun 1 13:50:42.012: INFO: Waiting for pod pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254 to disappear
Jun 1 13:50:42.034: INFO: Pod pod-projected-configmaps-cbe02704-4265-4f91-99c8-fc1a4459b254 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:50:42.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1460" for this suite.
Jun 1 13:50:48.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:50:48.296: INFO: namespace projected-1460 deletion completed in 6.258061621s
• [SLOW TEST:14.808 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:50:48.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 13:50:48.453: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jun 1 13:50:53.456: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 1 13:50:55.463: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jun 1 13:50:57.527: INFO: Creating deployment "test-rollover-deployment"
Jun 1 13:50:57.536: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jun 1 13:50:59.552: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jun 1 13:50:59.558: INFO: Ensure that both replica sets have 1 created replica Jun 1 13:50:59.563: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 1 13:50:59.568: INFO: Updating deployment test-rollover-deployment Jun 1 13:50:59.568: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 1 13:51:01.614: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 1 13:51:01.620: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 1 13:51:01.627: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:01.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616259, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:03.635: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:03.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616259, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:05.635: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:05.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616264, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:07.634: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:07.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616264, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:09.635: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:09.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616264, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:11.635: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:11.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616264, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:13.636: INFO: all replica sets need to contain the pod-template-hash label Jun 1 13:51:13.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616264, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616257, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:51:15.635: INFO: Jun 1 13:51:15.635: INFO: Ensure that both old replica 
sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 1 13:51:15.643: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-551,SelfLink:/apis/apps/v1/namespaces/deployment-551/deployments/test-rollover-deployment,UID:fdcd226b-780c-46d8-b0dc-6ffe1868c6e9,ResourceVersion:14090508,Generation:2,CreationTimestamp:2020-06-01 13:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-01 13:50:57 +0000 UTC 2020-06-01 13:50:57 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-01 13:51:14 +0000 UTC 2020-06-01 13:50:57 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 1 13:51:15.646: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-551,SelfLink:/apis/apps/v1/namespaces/deployment-551/replicasets/test-rollover-deployment-854595fc44,UID:1817b5f7-ddd0-468d-a3cf-35a348fa14eb,ResourceVersion:14090496,Generation:2,CreationTimestamp:2020-06-01 13:50:59 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fdcd226b-780c-46d8-b0dc-6ffe1868c6e9 0xc0031ab847 0xc0031ab848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 1 13:51:15.646: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 1 13:51:15.647: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-551,SelfLink:/apis/apps/v1/namespaces/deployment-551/replicasets/test-rollover-controller,UID:828d8de5-db82-470c-9a1f-e860961b9a14,ResourceVersion:14090505,Generation:2,CreationTimestamp:2020-06-01 13:50:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fdcd226b-780c-46d8-b0dc-6ffe1868c6e9 0xc0031ab777 0xc0031ab778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 13:51:15.647: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-551,SelfLink:/apis/apps/v1/namespaces/deployment-551/replicasets/test-rollover-deployment-9b8b997cf,UID:03f26d13-74c1-4474-9814-0460071d360c,ResourceVersion:14090457,Generation:2,CreationTimestamp:2020-06-01 13:50:57 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fdcd226b-780c-46d8-b0dc-6ffe1868c6e9 0xc0031ab930 0xc0031ab931}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 13:51:15.650: INFO: Pod "test-rollover-deployment-854595fc44-9dgxk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-9dgxk,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-551,SelfLink:/api/v1/namespaces/deployment-551/pods/test-rollover-deployment-854595fc44-9dgxk,UID:273d35a8-2d82-4576-909e-da8c3da6d766,ResourceVersion:14090473,Generation:0,CreationTimestamp:2020-06-01 13:50:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 1817b5f7-ddd0-468d-a3cf-35a348fa14eb 0xc001a05647 0xc001a05648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hw5k9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hw5k9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hw5k9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a05800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a05820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:50:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:51:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:51:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:50:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.155,StartTime:2020-06-01 13:50:59 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-01 13:51:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://fe6ce2632dbbb78b032c1f0a6eff2af1109cde45d5088a271f84a08876164b74}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:51:15.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-551" for this suite.
Jun 1 13:51:23.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:51:23.905: INFO: namespace deployment-551 deletion completed in 8.250451342s
• [SLOW TEST:35.609 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:51:23.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8321, will wait for the garbage collector to delete the pods
Jun 1 13:51:30.116: INFO: Deleting Job.batch foo took: 5.323689ms
Jun 1 13:51:30.317: INFO: Terminating Job.batch foo pods took: 200.398144ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:52:12.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8321" for this suite.
Jun 1 13:52:18.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:52:18.426: INFO: namespace job-8321 deletion completed in 6.180621988s
• [SLOW TEST:54.521 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:52:18.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0601 13:52:59.532796 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 1 13:52:59.532: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:52:59.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-783" for this suite.
Jun 1 13:53:11.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:53:11.623: INFO: namespace gc-783 deletion completed in 12.087356041s
• [SLOW TEST:53.197 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:53:11.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 1 13:53:17.139: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:53:17.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3440" for this suite. Jun 1 13:53:23.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:53:23.415: INFO: namespace container-runtime-3440 deletion completed in 6.194909825s • [SLOW TEST:11.791 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:53:23.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
projected-configmap-test-volume-56ee7696-05b5-4a27-bf2d-c9ba5da8fa54 STEP: Creating a pod to test consume configMaps Jun 1 13:53:23.734: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51" in namespace "projected-2572" to be "success or failure" Jun 1 13:53:23.736: INFO: Pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30458ms Jun 1 13:53:25.741: INFO: Pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007391591s Jun 1 13:53:27.963: INFO: Pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229191324s Jun 1 13:53:29.967: INFO: Pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51": Phase="Running", Reason="", readiness=true. Elapsed: 6.232933618s Jun 1 13:53:31.974: INFO: Pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239828625s STEP: Saw pod success Jun 1 13:53:31.974: INFO: Pod "pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51" satisfied condition "success or failure" Jun 1 13:53:31.977: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51 container projected-configmap-volume-test: STEP: delete the pod Jun 1 13:53:32.015: INFO: Waiting for pod pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51 to disappear Jun 1 13:53:32.020: INFO: Pod pod-projected-configmaps-fba9ec10-b75f-4035-9e9f-c50e64771f51 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:53:32.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2572" for this suite. 
Jun 1 13:53:38.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:53:38.138: INFO: namespace projected-2572 deletion completed in 6.114958538s

• [SLOW TEST:14.723 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:53:38.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 13:53:38.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2" in namespace "downward-api-3527" to be "success or failure"
Jun 1 13:53:38.463: INFO: Pod "downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.327651ms
Jun 1 13:53:40.467: INFO: Pod "downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040899627s
Jun 1 13:53:42.471: INFO: Pod "downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045699854s
Jun 1 13:53:44.531: INFO: Pod "downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104979608s
STEP: Saw pod success
Jun 1 13:53:44.531: INFO: Pod "downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2" satisfied condition "success or failure"
Jun 1 13:53:44.534: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2 container client-container:
STEP: delete the pod
Jun 1 13:53:44.577: INFO: Waiting for pod downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2 to disappear
Jun 1 13:53:44.618: INFO: Pod downwardapi-volume-02d8e440-944a-43e9-9f45-2c9d325c0ad2 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:53:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3527" for this suite.
Jun 1 13:53:50.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:53:50.824: INFO: namespace downward-api-3527 deletion completed in 6.202462805s • [SLOW TEST:12.685 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:53:50.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:53:50.922: INFO: Creating deployment "test-recreate-deployment" Jun 1 13:53:50.925: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 1 13:53:50.965: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 1 13:53:52.972: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 1 13:53:52.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616431, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616431, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616431, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616430, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:53:54.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616431, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616431, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616431, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616430, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:53:56.978: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 1 13:53:56.984: INFO: Updating deployment test-recreate-deployment Jun 1 13:53:56.984: INFO: Watching deployment 
"test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 1 13:53:58.395: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7431,SelfLink:/apis/apps/v1/namespaces/deployment-7431/deployments/test-recreate-deployment,UID:060b82f0-3053-4c43-b5a6-45e5d30983b3,ResourceVersion:14091206,Generation:2,CreationTimestamp:2020-06-01 13:53:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-01 13:53:57 +0000 UTC 2020-06-01 13:53:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-01 13:53:57 +0000 UTC 2020-06-01 13:53:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 1 13:53:58.706: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7431,SelfLink:/apis/apps/v1/namespaces/deployment-7431/replicasets/test-recreate-deployment-5c8c9cc69d,UID:afbc4d4b-294b-4be6-ab59-f7d0e08349b5,ResourceVersion:14091204,Generation:1,CreationTimestamp:2020-06-01 13:53:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 060b82f0-3053-4c43-b5a6-45e5d30983b3 0xc00310f8e7 0xc00310f8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 13:53:58.706: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 1 13:53:58.706: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7431,SelfLink:/apis/apps/v1/namespaces/deployment-7431/replicasets/test-recreate-deployment-6df85df6b9,UID:45a60f96-ef80-4aff-9138-8ca4a06219eb,ResourceVersion:14091192,Generation:2,CreationTimestamp:2020-06-01 13:53:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 060b82f0-3053-4c43-b5a6-45e5d30983b3 0xc00310f9b7 0xc00310f9b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 13:53:58.709: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kpspl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kpspl,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7431,SelfLink:/api/v1/namespaces/deployment-7431/pods/test-recreate-deployment-5c8c9cc69d-kpspl,UID:602bd420-2607-4cf0-9db3-7f3c9f36f234,ResourceVersion:14091205,Generation:0,CreationTimestamp:2020-06-01 13:53:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d afbc4d4b-294b-4be6-ab59-f7d0e08349b5 0xc002bbec77 0xc002bbec78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j2jjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j2jjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j2jjk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bbecf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bbed10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:53:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:53:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:53:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:53:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 13:53:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:53:58.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7431" for this suite. 
Jun 1 13:54:05.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:54:05.073: INFO: namespace deployment-7431 deletion completed in 6.360313584s • [SLOW TEST:14.249 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:54:05.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:54:05.528: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 1 13:54:08.149: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:54:09.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4708" for this suite. Jun 1 13:54:18.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:54:18.077: INFO: namespace replication-controller-4708 deletion completed in 8.817591864s • [SLOW TEST:13.003 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:54:18.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 13:54:18.256: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 1 13:54:18.355: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 1 13:54:23.367: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: 
ensuring each pod is running Jun 1 13:54:23.367: INFO: Creating deployment "test-rolling-update-deployment" Jun 1 13:54:23.371: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 1 13:54:23.460: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 1 13:54:25.566: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 1 13:54:25.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:54:27.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616463, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 1 13:54:29.600: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 1 13:54:29.610: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3379,SelfLink:/apis/apps/v1/namespaces/deployment-3379/deployments/test-rolling-update-deployment,UID:b849dce0-9a9e-49ad-aaaf-74651309fc33,ResourceVersion:14091404,Generation:1,CreationTimestamp:2020-06-01 13:54:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-01 13:54:23 +0000 UTC 2020-06-01 13:54:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-01 13:54:28 +0000 UTC 2020-06-01 13:54:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 1 13:54:29.614: 
INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3379,SelfLink:/apis/apps/v1/namespaces/deployment-3379/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:3c88a71a-badc-49cf-8b91-6982404accf9,ResourceVersion:14091393,Generation:1,CreationTimestamp:2020-06-01 13:54:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b849dce0-9a9e-49ad-aaaf-74651309fc33 0xc002505f17 0xc002505f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 1 13:54:29.614: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 1 13:54:29.614: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3379,SelfLink:/apis/apps/v1/namespaces/deployment-3379/replicasets/test-rolling-update-controller,UID:26e18051-c73c-45d9-8650-1039c7bdd34e,ResourceVersion:14091402,Generation:2,CreationTimestamp:2020-06-01 13:54:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b849dce0-9a9e-49ad-aaaf-74651309fc33 0xc002505e47 
0xc002505e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 13:54:29.880: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-5f2q5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-5f2q5,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3379,SelfLink:/api/v1/namespaces/deployment-3379/pods/test-rolling-update-deployment-79f6b9d75c-5f2q5,UID:688aa56e-6664-49fd-acb5-c8bb98c621d4,ResourceVersion:14091392,Generation:0,CreationTimestamp:2020-06-01 13:54:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 3c88a71a-badc-49cf-8b91-6982404accf9 0xc002c9a807 0xc002c9a808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ggmsg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ggmsg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ggmsg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c9a880} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c9a8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:54:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:54:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:54:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 13:54:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.75,StartTime:2020-06-01 13:54:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-01 13:54:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9ec29ea02ee3971055e5d5fe432f8df0f6fac7c9925a99ddbd5f8226cd10dc61}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:54:29.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-3379" for this suite. Jun 1 13:54:37.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:54:38.288: INFO: namespace deployment-3379 deletion completed in 8.404008487s • [SLOW TEST:20.211 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:54:38.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038 Jun 1 13:54:38.776: INFO: Pod name my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038: Found 0 pods out of 1 Jun 1 13:54:43.886: INFO: Pod name my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038: Found 1 pods out of 1 Jun 1 13:54:43.886: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038" are running Jun 1 13:54:45.896: INFO: Pod "my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038-xwfqj" is 
running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 13:54:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 13:54:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 13:54:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 13:54:38 +0000 UTC Reason: Message:}]) Jun 1 13:54:45.896: INFO: Trying to dial the pod Jun 1 13:54:50.930: INFO: Controller my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038: Got expected result from replica 1 [my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038-xwfqj]: "my-hostname-basic-47087bdb-6f71-4b25-9379-a77d0d542038-xwfqj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:54:50.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1597" for this suite. 
Jun 1 13:54:56.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:54:57.073: INFO: namespace replication-controller-1597 deletion completed in 6.138899135s • [SLOW TEST:18.784 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:54:57.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jun 1 13:54:57.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2848' Jun 1 13:54:57.450: INFO: stderr: "" Jun 1 13:54:57.450: INFO: stdout: "pod/pause created\n" Jun 1 13:54:57.450: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 1 13:54:57.450: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2848" to be "running and ready" Jun 1 13:54:57.487: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.166127ms Jun 1 13:54:59.491: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041364386s Jun 1 13:55:01.495: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044809761s Jun 1 13:55:03.499: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.049266739s Jun 1 13:55:03.499: INFO: Pod "pause" satisfied condition "running and ready" Jun 1 13:55:03.499: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jun 1 13:55:03.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2848' Jun 1 13:55:03.603: INFO: stderr: "" Jun 1 13:55:03.603: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 1 13:55:03.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2848' Jun 1 13:55:03.683: INFO: stderr: "" Jun 1 13:55:03.683: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 1 13:55:03.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2848' Jun 1 13:55:03.774: INFO: stderr: "" Jun 1 13:55:03.774: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 1 13:55:03.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2848' Jun 1 13:55:03.949: INFO: stderr: "" Jun 1 13:55:03.949: INFO: stdout: "NAME READY STATUS RESTARTS 
AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jun 1 13:55:03.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2848' Jun 1 13:55:04.111: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 13:55:04.111: INFO: stdout: "pod \"pause\" force deleted\n" Jun 1 13:55:04.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2848' Jun 1 13:55:04.216: INFO: stderr: "No resources found.\n" Jun 1 13:55:04.216: INFO: stdout: "" Jun 1 13:55:04.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2848 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 1 13:55:04.308: INFO: stderr: "" Jun 1 13:55:04.308: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:55:04.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2848" for this suite. 
Jun 1 13:55:10.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:55:10.468: INFO: namespace kubectl-2848 deletion completed in 6.156091974s • [SLOW TEST:13.395 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:55:10.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9398 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 1 13:55:10.672: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 1 13:55:38.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=udp&host=10.244.1.168&port=8081&tries=1'] Namespace:pod-network-test-9398 PodName:host-test-container-pod 
ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:55:38.825: INFO: >>> kubeConfig: /root/.kube/config I0601 13:55:38.857641 6 log.go:172] (0xc001f5a420) (0xc002235ea0) Create stream I0601 13:55:38.857675 6 log.go:172] (0xc001f5a420) (0xc002235ea0) Stream added, broadcasting: 1 I0601 13:55:38.859600 6 log.go:172] (0xc001f5a420) Reply frame received for 1 I0601 13:55:38.859648 6 log.go:172] (0xc001f5a420) (0xc002360c80) Create stream I0601 13:55:38.859657 6 log.go:172] (0xc001f5a420) (0xc002360c80) Stream added, broadcasting: 3 I0601 13:55:38.860293 6 log.go:172] (0xc001f5a420) Reply frame received for 3 I0601 13:55:38.860323 6 log.go:172] (0xc001f5a420) (0xc0023b0c80) Create stream I0601 13:55:38.860333 6 log.go:172] (0xc001f5a420) (0xc0023b0c80) Stream added, broadcasting: 5 I0601 13:55:38.861066 6 log.go:172] (0xc001f5a420) Reply frame received for 5 I0601 13:55:38.946338 6 log.go:172] (0xc001f5a420) Data frame received for 3 I0601 13:55:38.946384 6 log.go:172] (0xc002360c80) (3) Data frame handling I0601 13:55:38.946424 6 log.go:172] (0xc002360c80) (3) Data frame sent I0601 13:55:38.947052 6 log.go:172] (0xc001f5a420) Data frame received for 3 I0601 13:55:38.947094 6 log.go:172] (0xc002360c80) (3) Data frame handling I0601 13:55:38.947114 6 log.go:172] (0xc001f5a420) Data frame received for 5 I0601 13:55:38.947122 6 log.go:172] (0xc0023b0c80) (5) Data frame handling I0601 13:55:38.948590 6 log.go:172] (0xc001f5a420) Data frame received for 1 I0601 13:55:38.948608 6 log.go:172] (0xc002235ea0) (1) Data frame handling I0601 13:55:38.948619 6 log.go:172] (0xc002235ea0) (1) Data frame sent I0601 13:55:38.948631 6 log.go:172] (0xc001f5a420) (0xc002235ea0) Stream removed, broadcasting: 1 I0601 13:55:38.948650 6 log.go:172] (0xc001f5a420) Go away received I0601 13:55:38.948752 6 log.go:172] (0xc001f5a420) (0xc002235ea0) Stream removed, broadcasting: 1 I0601 13:55:38.948779 6 log.go:172] (0xc001f5a420) (0xc002360c80) 
Stream removed, broadcasting: 3 I0601 13:55:38.948792 6 log.go:172] (0xc001f5a420) (0xc0023b0c80) Stream removed, broadcasting: 5 Jun 1 13:55:38.948: INFO: Waiting for endpoints: map[] Jun 1 13:55:38.951: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=udp&host=10.244.2.77&port=8081&tries=1'] Namespace:pod-network-test-9398 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 13:55:38.952: INFO: >>> kubeConfig: /root/.kube/config I0601 13:55:38.981770 6 log.go:172] (0xc001f5ab00) (0xc000c0a280) Create stream I0601 13:55:38.981795 6 log.go:172] (0xc001f5ab00) (0xc000c0a280) Stream added, broadcasting: 1 I0601 13:55:38.983687 6 log.go:172] (0xc001f5ab00) Reply frame received for 1 I0601 13:55:38.983719 6 log.go:172] (0xc001f5ab00) (0xc0030a4e60) Create stream I0601 13:55:38.983740 6 log.go:172] (0xc001f5ab00) (0xc0030a4e60) Stream added, broadcasting: 3 I0601 13:55:38.984582 6 log.go:172] (0xc001f5ab00) Reply frame received for 3 I0601 13:55:38.984606 6 log.go:172] (0xc001f5ab00) (0xc000c0a460) Create stream I0601 13:55:38.984615 6 log.go:172] (0xc001f5ab00) (0xc000c0a460) Stream added, broadcasting: 5 I0601 13:55:38.986018 6 log.go:172] (0xc001f5ab00) Reply frame received for 5 I0601 13:55:39.052929 6 log.go:172] (0xc001f5ab00) Data frame received for 3 I0601 13:55:39.053022 6 log.go:172] (0xc0030a4e60) (3) Data frame handling I0601 13:55:39.053059 6 log.go:172] (0xc0030a4e60) (3) Data frame sent I0601 13:55:39.053468 6 log.go:172] (0xc001f5ab00) Data frame received for 5 I0601 13:55:39.053551 6 log.go:172] (0xc000c0a460) (5) Data frame handling I0601 13:55:39.053734 6 log.go:172] (0xc001f5ab00) Data frame received for 3 I0601 13:55:39.053769 6 log.go:172] (0xc0030a4e60) (3) Data frame handling I0601 13:55:39.055409 6 log.go:172] (0xc001f5ab00) Data frame received for 1 I0601 13:55:39.055438 6 log.go:172] (0xc000c0a280) (1) 
Data frame handling I0601 13:55:39.055458 6 log.go:172] (0xc000c0a280) (1) Data frame sent I0601 13:55:39.055472 6 log.go:172] (0xc001f5ab00) (0xc000c0a280) Stream removed, broadcasting: 1 I0601 13:55:39.055493 6 log.go:172] (0xc001f5ab00) Go away received I0601 13:55:39.055622 6 log.go:172] (0xc001f5ab00) (0xc000c0a280) Stream removed, broadcasting: 1 I0601 13:55:39.055660 6 log.go:172] (0xc001f5ab00) (0xc0030a4e60) Stream removed, broadcasting: 3 I0601 13:55:39.055696 6 log.go:172] (0xc001f5ab00) (0xc000c0a460) Stream removed, broadcasting: 5 Jun 1 13:55:39.055: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:55:39.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9398" for this suite. Jun 1 13:56:05.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:56:05.271: INFO: namespace pod-network-test-9398 deletion completed in 26.192988763s • [SLOW TEST:54.803 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:56:05.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jun 1 13:56:05.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 1 13:56:05.774: INFO: stderr: "" Jun 1 13:56:05.774: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:56:05.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3878" for this suite. 
Jun 1 13:56:11.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:56:11.939: INFO: namespace kubectl-3878 deletion completed in 6.161069298s • [SLOW TEST:6.667 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:56:11.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 13:56:12.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64" in namespace "downward-api-6665" to be "success or failure" Jun 1 13:56:12.224: INFO: Pod "downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.448982ms Jun 1 13:56:14.295: INFO: Pod "downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101223213s Jun 1 13:56:16.300: INFO: Pod "downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105372866s Jun 1 13:56:18.304: INFO: Pod "downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109352117s STEP: Saw pod success Jun 1 13:56:18.304: INFO: Pod "downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64" satisfied condition "success or failure" Jun 1 13:56:18.307: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64 container client-container: STEP: delete the pod Jun 1 13:56:18.336: INFO: Waiting for pod downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64 to disappear Jun 1 13:56:18.401: INFO: Pod downwardapi-volume-95264843-613d-4b01-b7f1-95d78a45bc64 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:56:18.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6665" for this suite. 
Jun 1 13:56:24.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 13:56:24.538: INFO: namespace downward-api-6665 deletion completed in 6.125575759s • [SLOW TEST:12.598 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 13:56:24.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0601 13:56:34.730463 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 1 13:56:34.730: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:56:34.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6737" for this suite. 
Jun 1 13:56:42.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:56:42.948: INFO: namespace gc-6737 deletion completed in 8.214342134s

• [SLOW TEST:18.410 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:56:42.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-797537f5-b607-444f-b4d6-3a3dbcccddc8
STEP: Creating a pod to test consume secrets
Jun 1 13:56:43.167: INFO: Waiting up to 5m0s for pod "pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133" in namespace "secrets-8302" to be "success or failure"
Jun 1 13:56:43.178: INFO: Pod "pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133": Phase="Pending", Reason="", readiness=false. Elapsed: 11.4083ms
Jun 1 13:56:45.182: INFO: Pod "pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015251051s
Jun 1 13:56:47.186: INFO: Pod "pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018833919s
Jun 1 13:56:49.308: INFO: Pod "pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141112082s
STEP: Saw pod success
Jun 1 13:56:49.308: INFO: Pod "pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133" satisfied condition "success or failure"
Jun 1 13:56:49.315: INFO: Trying to get logs from node iruya-worker pod pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133 container secret-volume-test: 
STEP: delete the pod
Jun 1 13:56:49.343: INFO: Waiting for pod pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133 to disappear
Jun 1 13:56:49.528: INFO: Pod pod-secrets-22a88a75-546b-4c4a-ac08-407585e37133 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:56:49.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8302" for this suite. 
Jun 1 13:56:55.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:56:55.697: INFO: namespace secrets-8302 deletion completed in 6.16480959s

• [SLOW TEST:12.748 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:56:55.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 1 13:56:55.883: INFO: Waiting up to 5m0s for pod "pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168" in namespace "emptydir-4253" to be "success or failure"
Jun 1 13:56:55.886: INFO: Pod "pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168": Phase="Pending", Reason="", readiness=false. Elapsed: 3.570986ms
Jun 1 13:56:57.923: INFO: Pod "pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040263826s
Jun 1 13:56:59.927: INFO: Pod "pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044596602s
Jun 1 13:57:02.019: INFO: Pod "pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136512549s
STEP: Saw pod success
Jun 1 13:57:02.019: INFO: Pod "pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168" satisfied condition "success or failure"
Jun 1 13:57:02.022: INFO: Trying to get logs from node iruya-worker2 pod pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168 container test-container: 
STEP: delete the pod
Jun 1 13:57:02.101: INFO: Waiting for pod pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168 to disappear
Jun 1 13:57:02.223: INFO: Pod pod-b8604a13-0d09-43c2-bcdb-a08fb4ab7168 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:57:02.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4253" for this suite. 
Jun 1 13:57:08.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:57:08.363: INFO: namespace emptydir-4253 deletion completed in 6.135591331s

• [SLOW TEST:12.664 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:57:08.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 1 13:57:08.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9748'
Jun 1 13:57:08.612: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 1 13:57:08.612: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jun 1 13:57:08.675: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jun 1 13:57:08.693: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jun 1 13:57:08.722: INFO: scanned /root for discovery docs: 
Jun 1 13:57:08.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9748'
Jun 1 13:57:26.193: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jun 1 13:57:26.193: INFO: stdout: "Created e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9\nScaling up e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jun 1 13:57:26.193: INFO: stdout: "Created e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9\nScaling up e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jun 1 13:57:26.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9748'
Jun 1 13:57:26.284: INFO: stderr: ""
Jun 1 13:57:26.284: INFO: stdout: "e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9-c6qbv "
Jun 1 13:57:26.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9-c6qbv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jun 1 13:57:26.367: INFO: stderr: ""
Jun 1 13:57:26.367: INFO: stdout: "true"
Jun 1 13:57:26.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9-c6qbv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9748' Jun 1 13:57:26.459: INFO: stderr: "" Jun 1 13:57:26.459: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 1 13:57:26.459: INFO: e2e-test-nginx-rc-1ee7d437f23d2cab9165c2b7c44148a9-c6qbv is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jun 1 13:57:26.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9748' Jun 1 13:57:26.628: INFO: stderr: "" Jun 1 13:57:26.628: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 13:57:26.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9748" for this suite. 
Jun 1 13:57:50.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:57:50.852: INFO: namespace kubectl-9748 deletion completed in 24.17185973s

• [SLOW TEST:42.489 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:57:50.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-cc4daff9-ee31-42c0-a6ff-3eb3f5adb74c
STEP: Creating a pod to test consume configMaps
Jun 1 13:57:51.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc" in namespace "configmap-2596" to be "success or failure"
Jun 1 13:57:51.151: INFO: Pod "pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.669737ms
Jun 1 13:57:53.156: INFO: Pod "pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019311986s
Jun 1 13:57:55.191: INFO: Pod "pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054279044s
Jun 1 13:57:57.195: INFO: Pod "pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059006271s
STEP: Saw pod success
Jun 1 13:57:57.196: INFO: Pod "pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc" satisfied condition "success or failure"
Jun 1 13:57:57.198: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc container configmap-volume-test: 
STEP: delete the pod
Jun 1 13:57:57.259: INFO: Waiting for pod pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc to disappear
Jun 1 13:57:57.349: INFO: Pod pod-configmaps-e5b44415-798a-4b6a-a6eb-3c2281ba91cc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:57:57.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2596" for this suite. 
Jun 1 13:58:03.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:58:03.469: INFO: namespace configmap-2596 deletion completed in 6.11673466s

• [SLOW TEST:12.617 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:58:03.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 1 13:58:03.728: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:58:15.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3293" for this suite. 
Jun 1 13:58:39.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:58:39.241: INFO: namespace init-container-3293 deletion completed in 24.112123641s

• [SLOW TEST:35.772 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:58:39.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 13:58:39.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:58:43.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3108" for this suite. 
Jun 1 13:59:23.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:59:23.677: INFO: namespace pods-3108 deletion completed in 40.176925819s

• [SLOW TEST:44.435 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:59:23.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 13:59:23.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760" in namespace "downward-api-2968" to be "success or failure"
Jun 1 13:59:23.831: INFO: Pod "downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.846247ms
Jun 1 13:59:25.847: INFO: Pod "downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051473221s
Jun 1 13:59:27.903: INFO: Pod "downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10746433s
Jun 1 13:59:29.908: INFO: Pod "downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111552873s
STEP: Saw pod success
Jun 1 13:59:29.908: INFO: Pod "downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760" satisfied condition "success or failure"
Jun 1 13:59:29.910: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760 container client-container: 
STEP: delete the pod
Jun 1 13:59:29.933: INFO: Waiting for pod downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760 to disappear
Jun 1 13:59:29.951: INFO: Pod downwardapi-volume-27fdb1c6-d745-4765-bd77-5ebdf27f3760 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:59:29.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2968" for this suite. 
Jun 1 13:59:35.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 13:59:36.032: INFO: namespace downward-api-2968 deletion completed in 6.07855576s

• [SLOW TEST:12.355 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 13:59:36.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7182
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7182
STEP: Deleting pre-stop pod
Jun 1 13:59:55.168: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 13:59:55.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7182" for this suite.
Jun 1 14:00:33.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:00:33.339: INFO: namespace prestop-7182 deletion completed in 38.095452974s

• [SLOW TEST:57.306 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:00:33.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
Jun 1 14:00:37.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3623" for this suite.
Jun 1 14:00:43.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:00:43.731: INFO: namespace emptydir-wrapper-3623 deletion completed in 6.114774841s

• [SLOW TEST:10.392 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:00:43.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7f466728-819a-4d38-a32c-c7ef5efbf5ac
STEP: Creating a pod to test consume secrets
Jun 1 14:00:43.811: INFO: Waiting up to 5m0s for pod "pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1" in namespace "secrets-7208" to be "success or failure"
Jun 1 14:00:43.815: INFO: Pod "pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.564218ms
Jun 1 14:00:45.819: INFO: Pod "pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007759931s
Jun 1 14:00:47.823: INFO: Pod "pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011885466s
Jun 1 14:00:49.833: INFO: Pod "pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021589409s
STEP: Saw pod success
Jun 1 14:00:49.833: INFO: Pod "pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1" satisfied condition "success or failure"
Jun 1 14:00:49.835: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1 container secret-volume-test: 
STEP: delete the pod
Jun 1 14:00:49.853: INFO: Waiting for pod pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1 to disappear
Jun 1 14:00:49.857: INFO: Pod pod-secrets-0bd2c75a-df41-46db-b1d8-86e033cb09b1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:00:49.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7208" for this suite. 
Jun 1 14:00:55.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:00:55.952: INFO: namespace secrets-7208 deletion completed in 6.091811828s

• [SLOW TEST:12.221 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:00:55.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jun 1 14:00:56.026: INFO: Waiting up to 5m0s for pod "client-containers-1c8e2194-850b-448f-bf4c-ebc190669800" in namespace "containers-8570" to be "success or failure"
Jun 1 14:00:56.046: INFO: Pod "client-containers-1c8e2194-850b-448f-bf4c-ebc190669800": Phase="Pending", Reason="", readiness=false. Elapsed: 20.374057ms
Jun 1 14:00:58.072: INFO: Pod "client-containers-1c8e2194-850b-448f-bf4c-ebc190669800": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045988021s Jun 1 14:01:00.075: INFO: Pod "client-containers-1c8e2194-850b-448f-bf4c-ebc190669800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049577878s STEP: Saw pod success Jun 1 14:01:00.075: INFO: Pod "client-containers-1c8e2194-850b-448f-bf4c-ebc190669800" satisfied condition "success or failure" Jun 1 14:01:00.078: INFO: Trying to get logs from node iruya-worker pod client-containers-1c8e2194-850b-448f-bf4c-ebc190669800 container test-container: STEP: delete the pod Jun 1 14:01:00.103: INFO: Waiting for pod client-containers-1c8e2194-850b-448f-bf4c-ebc190669800 to disappear Jun 1 14:01:00.125: INFO: Pod client-containers-1c8e2194-850b-448f-bf4c-ebc190669800 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:01:00.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8570" for this suite. Jun 1 14:01:06.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:01:06.257: INFO: namespace containers-8570 deletion completed in 6.127229203s • [SLOW TEST:10.305 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 
14:01:06.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-644ac33f-c34e-44d9-8b45-62ccc64e7b5f STEP: Creating a pod to test consume configMaps Jun 1 14:01:06.331: INFO: Waiting up to 5m0s for pod "pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d" in namespace "configmap-5723" to be "success or failure" Jun 1 14:01:06.335: INFO: Pod "pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299992ms Jun 1 14:01:08.340: INFO: Pod "pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008786462s Jun 1 14:01:10.422: INFO: Pod "pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091682873s STEP: Saw pod success Jun 1 14:01:10.422: INFO: Pod "pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d" satisfied condition "success or failure" Jun 1 14:01:10.426: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d container configmap-volume-test: STEP: delete the pod Jun 1 14:01:10.508: INFO: Waiting for pod pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d to disappear Jun 1 14:01:10.543: INFO: Pod pod-configmaps-991e5b28-6c7b-4cc2-a684-0e92d019574d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:01:10.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5723" for this suite. 
Jun 1 14:01:16.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:01:16.653: INFO: namespace configmap-5723 deletion completed in 6.105238611s • [SLOW TEST:10.396 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:01:16.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 14:01:16.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a" in namespace "downward-api-2748" to be "success or failure" Jun 1 14:01:16.753: INFO: Pod "downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.713223ms Jun 1 14:01:18.762: INFO: Pod "downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017557359s Jun 1 14:01:20.779: INFO: Pod "downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a": Phase="Running", Reason="", readiness=true. Elapsed: 4.034224182s Jun 1 14:01:22.786: INFO: Pod "downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041335615s STEP: Saw pod success Jun 1 14:01:22.786: INFO: Pod "downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a" satisfied condition "success or failure" Jun 1 14:01:22.798: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a container client-container: STEP: delete the pod Jun 1 14:01:22.812: INFO: Waiting for pod downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a to disappear Jun 1 14:01:22.816: INFO: Pod downwardapi-volume-25428517-7c78-4fea-bbf7-2f6eccfa284a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:01:22.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2748" for this suite. 
Jun 1 14:01:28.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:01:28.937: INFO: namespace downward-api-2748 deletion completed in 6.117842252s • [SLOW TEST:12.284 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:01:28.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 1 14:01:29.055: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 1 14:01:29.063: INFO: Waiting for terminating namespaces to be deleted... 
Jun 1 14:01:29.065: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 1 14:01:29.069: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 1 14:01:29.069: INFO: Container kube-proxy ready: true, restart count 0 Jun 1 14:01:29.069: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 1 14:01:29.069: INFO: Container kindnet-cni ready: true, restart count 2 Jun 1 14:01:29.069: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 1 14:01:29.075: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 1 14:01:29.075: INFO: Container kindnet-cni ready: true, restart count 2 Jun 1 14:01:29.075: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 1 14:01:29.075: INFO: Container kube-proxy ready: true, restart count 0 Jun 1 14:01:29.075: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 1 14:01:29.075: INFO: Container coredns ready: true, restart count 0 Jun 1 14:01:29.075: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 1 14:01:29.075: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1614705768bb8491], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:01:30.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8193" for this suite. Jun 1 14:01:36.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:01:36.229: INFO: namespace sched-pred-8193 deletion completed in 6.130848125s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.292 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:01:36.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jun 1 14:01:36.290: INFO: Waiting up to 5m0s for pod 
"client-containers-aace80a5-8350-4138-bea1-885968617836" in namespace "containers-3133" to be "success or failure" Jun 1 14:01:36.305: INFO: Pod "client-containers-aace80a5-8350-4138-bea1-885968617836": Phase="Pending", Reason="", readiness=false. Elapsed: 15.407576ms Jun 1 14:01:38.436: INFO: Pod "client-containers-aace80a5-8350-4138-bea1-885968617836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146630522s Jun 1 14:01:40.440: INFO: Pod "client-containers-aace80a5-8350-4138-bea1-885968617836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150635049s STEP: Saw pod success Jun 1 14:01:40.440: INFO: Pod "client-containers-aace80a5-8350-4138-bea1-885968617836" satisfied condition "success or failure" Jun 1 14:01:40.443: INFO: Trying to get logs from node iruya-worker2 pod client-containers-aace80a5-8350-4138-bea1-885968617836 container test-container: STEP: delete the pod Jun 1 14:01:40.473: INFO: Waiting for pod client-containers-aace80a5-8350-4138-bea1-885968617836 to disappear Jun 1 14:01:40.479: INFO: Pod client-containers-aace80a5-8350-4138-bea1-885968617836 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:01:40.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3133" for this suite. 
Jun 1 14:01:46.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:01:46.575: INFO: namespace containers-3133 deletion completed in 6.091801884s • [SLOW TEST:10.345 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:01:46.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 1 14:01:46.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7725' Jun 1 14:01:49.019: INFO: stderr: "" Jun 1 14:01:49.019: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: 
verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 1 14:01:54.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7725 -o json' Jun 1 14:01:54.163: INFO: stderr: "" Jun 1 14:01:54.163: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-01T14:01:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7725\",\n \"resourceVersion\": \"14092952\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7725/pods/e2e-test-nginx-pod\",\n \"uid\": \"96f8f4d0-7e0e-430c-91a7-5549b9cdf3d9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kkg2d\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kkg2d\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kkg2d\"\n }\n }\n ]\n 
},\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-01T14:01:49Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-01T14:01:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-01T14:01:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-01T14:01:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://1adb30286ff23f33985608c3136b530b1165917a7b5791ff8135ec7c013a719a\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-01T14:01:52Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.88\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-01T14:01:49Z\"\n }\n}\n" STEP: replace the image in the pod Jun 1 14:01:54.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7725' Jun 1 14:01:54.446: INFO: stderr: "" Jun 1 14:01:54.446: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jun 1 14:01:54.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7725' Jun 1 14:01:58.501: INFO: stderr: "" Jun 1 14:01:58.501: INFO: stdout: "pod 
\"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:01:58.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7725" for this suite. Jun 1 14:02:04.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:02:04.635: INFO: namespace kubectl-7725 deletion completed in 6.130185037s • [SLOW TEST:18.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:02:04.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 14:02:04.775: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any 
nodes. Jun 1 14:02:04.790: INFO: Number of nodes with available pods: 0 Jun 1 14:02:04.790: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jun 1 14:02:04.871: INFO: Number of nodes with available pods: 0 Jun 1 14:02:04.871: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:05.875: INFO: Number of nodes with available pods: 0 Jun 1 14:02:05.875: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:06.875: INFO: Number of nodes with available pods: 0 Jun 1 14:02:06.875: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:07.876: INFO: Number of nodes with available pods: 1 Jun 1 14:02:07.876: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 1 14:02:07.910: INFO: Number of nodes with available pods: 1 Jun 1 14:02:07.910: INFO: Number of running nodes: 0, number of available pods: 1 Jun 1 14:02:08.915: INFO: Number of nodes with available pods: 0 Jun 1 14:02:08.915: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 1 14:02:08.936: INFO: Number of nodes with available pods: 0 Jun 1 14:02:08.936: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:09.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:09.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:10.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:10.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:11.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:11.942: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:12.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:12.941: INFO: Node iruya-worker is running more than one daemon pod Jun 
1 14:02:13.940: INFO: Number of nodes with available pods: 0 Jun 1 14:02:13.940: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:14.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:14.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:15.942: INFO: Number of nodes with available pods: 0 Jun 1 14:02:15.942: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:16.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:16.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:17.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:17.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:18.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:18.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:19.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:19.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:20.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:20.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:21.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:21.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:22.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:22.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:23.940: INFO: Number of nodes with available pods: 0 Jun 1 14:02:23.940: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:24.941: INFO: Number of nodes with available pods: 0 Jun 1 14:02:24.941: INFO: Node iruya-worker is running more than one daemon pod Jun 1 14:02:25.940: INFO: Number of nodes with available pods: 1 Jun 1 14:02:25.940: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1795, will wait for the garbage collector to delete the pods Jun 1 14:02:26.005: INFO: Deleting DaemonSet.extensions daemon-set took: 6.955453ms Jun 1 14:02:26.306: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.28148ms Jun 1 14:02:32.209: INFO: Number of nodes with available pods: 0 Jun 1 14:02:32.209: INFO: Number of running nodes: 0, number of available pods: 0 Jun 1 14:02:32.212: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1795/daemonsets","resourceVersion":"14093099"},"items":null} Jun 1 14:02:32.214: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1795/pods","resourceVersion":"14093099"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:02:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1795" for this suite. 
Jun 1 14:02:38.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:02:38.350: INFO: namespace daemonsets-1795 deletion completed in 6.09646873s • [SLOW TEST:33.714 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:02:38.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 1 14:02:38.420: INFO: Waiting up to 5m0s for pod "downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64" in namespace "downward-api-5849" to be "success or failure" Jun 1 14:02:38.423: INFO: Pod "downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64": Phase="Pending", Reason="", readiness=false. Elapsed: 3.526454ms Jun 1 14:02:40.428: INFO: Pod "downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008274127s Jun 1 14:02:42.432: INFO: Pod "downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012007259s STEP: Saw pod success Jun 1 14:02:42.432: INFO: Pod "downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64" satisfied condition "success or failure" Jun 1 14:02:42.434: INFO: Trying to get logs from node iruya-worker2 pod downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64 container dapi-container: STEP: delete the pod Jun 1 14:02:42.466: INFO: Waiting for pod downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64 to disappear Jun 1 14:02:42.470: INFO: Pod downward-api-c5ed42f3-4735-44b7-b1fa-0d98b1a00b64 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:02:42.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5849" for this suite. Jun 1 14:02:48.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:02:48.621: INFO: namespace downward-api-5849 deletion completed in 6.14782406s • [SLOW TEST:10.271 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:02:48.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:03:48.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8930" for this suite. Jun 1 14:04:10.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:04:10.791: INFO: namespace container-probe-8930 deletion completed in 22.103248512s • [SLOW TEST:82.169 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:04:10.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in 
same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:04:16.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3697" for this suite. Jun 1 14:04:22.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:04:22.482: INFO: namespace watch-3697 deletion completed in 6.184670375s • [SLOW TEST:11.691 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:04:22.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3080 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 1 14:04:22.565: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 1 14:04:46.661: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.181:8080/dial?request=hostName&protocol=http&host=10.244.2.92&port=8080&tries=1'] Namespace:pod-network-test-3080 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 14:04:46.661: INFO: >>> kubeConfig: /root/.kube/config I0601 14:04:46.695607 6 log.go:172] (0xc0021624d0) (0xc002823180) Create stream I0601 14:04:46.695642 6 log.go:172] (0xc0021624d0) (0xc002823180) Stream added, broadcasting: 1 I0601 14:04:46.698112 6 log.go:172] (0xc0021624d0) Reply frame received for 1 I0601 14:04:46.698166 6 log.go:172] (0xc0021624d0) (0xc0031739a0) Create stream I0601 14:04:46.698182 6 log.go:172] (0xc0021624d0) (0xc0031739a0) Stream added, broadcasting: 3 I0601 14:04:46.699284 6 log.go:172] (0xc0021624d0) Reply frame received for 3 I0601 14:04:46.699333 6 log.go:172] (0xc0021624d0) (0xc00275a780) Create stream I0601 14:04:46.699345 6 log.go:172] (0xc0021624d0) (0xc00275a780) Stream added, broadcasting: 5 I0601 14:04:46.700460 6 log.go:172] (0xc0021624d0) Reply frame received for 5 I0601 14:04:46.840241 6 log.go:172] (0xc0021624d0) Data frame received for 3 I0601 14:04:46.840261 6 log.go:172] (0xc0031739a0) (3) Data frame handling I0601 14:04:46.840271 6 log.go:172] (0xc0031739a0) (3) Data frame sent I0601 14:04:46.840776 6 log.go:172] (0xc0021624d0) Data frame received for 3 I0601 14:04:46.840792 6 log.go:172] (0xc0031739a0) (3) Data frame handling I0601 14:04:46.840818 6 log.go:172] (0xc0021624d0) Data frame received for 5 I0601 14:04:46.840845 6 
log.go:172] (0xc00275a780) (5) Data frame handling I0601 14:04:46.842160 6 log.go:172] (0xc0021624d0) Data frame received for 1 I0601 14:04:46.842172 6 log.go:172] (0xc002823180) (1) Data frame handling I0601 14:04:46.842186 6 log.go:172] (0xc002823180) (1) Data frame sent I0601 14:04:46.842197 6 log.go:172] (0xc0021624d0) (0xc002823180) Stream removed, broadcasting: 1 I0601 14:04:46.842207 6 log.go:172] (0xc0021624d0) Go away received I0601 14:04:46.842316 6 log.go:172] (0xc0021624d0) (0xc002823180) Stream removed, broadcasting: 1 I0601 14:04:46.842334 6 log.go:172] (0xc0021624d0) (0xc0031739a0) Stream removed, broadcasting: 3 I0601 14:04:46.842342 6 log.go:172] (0xc0021624d0) (0xc00275a780) Stream removed, broadcasting: 5 Jun 1 14:04:46.842: INFO: Waiting for endpoints: map[] Jun 1 14:04:46.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.181:8080/dial?request=hostName&protocol=http&host=10.244.1.180&port=8080&tries=1'] Namespace:pod-network-test-3080 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 1 14:04:46.845: INFO: >>> kubeConfig: /root/.kube/config I0601 14:04:46.868417 6 log.go:172] (0xc00084c9a0) (0xc002235b80) Create stream I0601 14:04:46.868436 6 log.go:172] (0xc00084c9a0) (0xc002235b80) Stream added, broadcasting: 1 I0601 14:04:46.870456 6 log.go:172] (0xc00084c9a0) Reply frame received for 1 I0601 14:04:46.870507 6 log.go:172] (0xc00084c9a0) (0xc002235c20) Create stream I0601 14:04:46.870521 6 log.go:172] (0xc00084c9a0) (0xc002235c20) Stream added, broadcasting: 3 I0601 14:04:46.871303 6 log.go:172] (0xc00084c9a0) Reply frame received for 3 I0601 14:04:46.871329 6 log.go:172] (0xc00084c9a0) (0xc00275a820) Create stream I0601 14:04:46.871340 6 log.go:172] (0xc00084c9a0) (0xc00275a820) Stream added, broadcasting: 5 I0601 14:04:46.872174 6 log.go:172] (0xc00084c9a0) Reply frame received for 5 I0601 14:04:46.938471 6 log.go:172] (0xc00084c9a0) 
Data frame received for 3 I0601 14:04:46.938502 6 log.go:172] (0xc002235c20) (3) Data frame handling I0601 14:04:46.938519 6 log.go:172] (0xc002235c20) (3) Data frame sent I0601 14:04:46.939320 6 log.go:172] (0xc00084c9a0) Data frame received for 3 I0601 14:04:46.939341 6 log.go:172] (0xc002235c20) (3) Data frame handling I0601 14:04:46.939368 6 log.go:172] (0xc00084c9a0) Data frame received for 5 I0601 14:04:46.939398 6 log.go:172] (0xc00275a820) (5) Data frame handling I0601 14:04:46.940732 6 log.go:172] (0xc00084c9a0) Data frame received for 1 I0601 14:04:46.940747 6 log.go:172] (0xc002235b80) (1) Data frame handling I0601 14:04:46.940760 6 log.go:172] (0xc002235b80) (1) Data frame sent I0601 14:04:46.940773 6 log.go:172] (0xc00084c9a0) (0xc002235b80) Stream removed, broadcasting: 1 I0601 14:04:46.940849 6 log.go:172] (0xc00084c9a0) (0xc002235b80) Stream removed, broadcasting: 1 I0601 14:04:46.940862 6 log.go:172] (0xc00084c9a0) (0xc002235c20) Stream removed, broadcasting: 3 I0601 14:04:46.940878 6 log.go:172] (0xc00084c9a0) (0xc00275a820) Stream removed, broadcasting: 5 Jun 1 14:04:46.940: INFO: Waiting for endpoints: map[] I0601 14:04:46.940927 6 log.go:172] (0xc00084c9a0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:04:46.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3080" for this suite. 
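The intra-pod connectivity check above works by curling a "dial" endpoint on one test pod, which in turn dials the target pod and reports back which hostnames answered. The sketch below reproduces that request/response contract entirely against a local stand-in HTTP server (no cluster required); the handler is a hypothetical stand-in for the real netexec container's `/dial` endpoint, not its actual implementation.

```python
# Local sketch of the e2e "dial" pattern: a server that, given
# /dial?host=...&port=...&tries=1, pretends to dial the host and
# reports which endpoints answered. Stand-in only; the real test
# image performs actual network dials.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class DialHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        # Pretend we dialed q["host"] and it replied with its hostname.
        body = json.dumps({"responses": [q.get("host", ["?"])[0]]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DialHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Same shape as the curl the test runs from host-test-container-pod.
url = f"http://127.0.0.1:{port}/dial?request=hostName&host=10.244.2.92&port=8080&tries=1"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)
server.shutdown()
print(result["responses"])
```

The test passes once every expected endpoint shows up in the responses; "Waiting for endpoints: map[]" in the log means the expected set has been fully drained.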
Jun 1 14:05:10.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:05:11.054: INFO: namespace pod-network-test-3080 deletion completed in 24.11017972s • [SLOW TEST:48.571 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:05:11.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 1 14:05:11.148: INFO: Waiting up to 5m0s for pod "pod-041f6425-2fb6-4c5a-9869-c61a370ec1be" in namespace "emptydir-9357" to be "success or failure" Jun 1 14:05:11.155: INFO: Pod "pod-041f6425-2fb6-4c5a-9869-c61a370ec1be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69015ms Jun 1 14:05:13.159: INFO: Pod "pod-041f6425-2fb6-4c5a-9869-c61a370ec1be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011098983s Jun 1 14:05:15.164: INFO: Pod "pod-041f6425-2fb6-4c5a-9869-c61a370ec1be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015634069s STEP: Saw pod success Jun 1 14:05:15.164: INFO: Pod "pod-041f6425-2fb6-4c5a-9869-c61a370ec1be" satisfied condition "success or failure" Jun 1 14:05:15.168: INFO: Trying to get logs from node iruya-worker2 pod pod-041f6425-2fb6-4c5a-9869-c61a370ec1be container test-container: STEP: delete the pod Jun 1 14:05:15.204: INFO: Waiting for pod pod-041f6425-2fb6-4c5a-9869-c61a370ec1be to disappear Jun 1 14:05:15.221: INFO: Pod pod-041f6425-2fb6-4c5a-9869-c61a370ec1be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:05:15.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9357" for this suite. Jun 1 14:05:21.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:05:21.316: INFO: namespace emptydir-9357 deletion completed in 6.09199007s • [SLOW TEST:10.262 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:05:21.317: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:05:25.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2819" for this suite. Jun 1 14:06:03.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:06:03.523: INFO: namespace kubelet-test-2819 deletion completed in 38.097084261s • [SLOW TEST:42.206 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:06:03.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] 
optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b9f8febe-a2d0-4896-8b39-a291910ed474 STEP: Creating configMap with name cm-test-opt-upd-37533cba-9b4a-4374-89a6-86e87b127b74 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b9f8febe-a2d0-4896-8b39-a291910ed474 STEP: Updating configmap cm-test-opt-upd-37533cba-9b4a-4374-89a6-86e87b127b74 STEP: Creating configMap with name cm-test-opt-create-fa7813d4-bba0-4df9-98c2-6c7698a38bc3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:07:38.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2471" for this suite. Jun 1 14:08:00.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:08:00.372: INFO: namespace configmap-2471 deletion completed in 22.09485306s • [SLOW TEST:116.849 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:08:00.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 1 14:08:00.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4170' Jun 1 14:08:00.827: INFO: stderr: "" Jun 1 14:08:00.827: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 1 14:08:00.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4170' Jun 1 14:08:00.922: INFO: stderr: "" Jun 1 14:08:00.922: INFO: stdout: "update-demo-nautilus-nqvnr update-demo-nautilus-zmcb9 " Jun 1 14:08:00.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:01.041: INFO: stderr: "" Jun 1 14:08:01.041: INFO: stdout: "" Jun 1 14:08:01.041: INFO: update-demo-nautilus-nqvnr is created but not running Jun 1 14:08:06.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4170' Jun 1 14:08:06.146: INFO: stderr: "" Jun 1 14:08:06.146: INFO: stdout: "update-demo-nautilus-nqvnr update-demo-nautilus-zmcb9 " Jun 1 14:08:06.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:06.246: INFO: stderr: "" Jun 1 14:08:06.246: INFO: stdout: "true" Jun 1 14:08:06.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:06.335: INFO: stderr: "" Jun 1 14:08:06.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:08:06.335: INFO: validating pod update-demo-nautilus-nqvnr Jun 1 14:08:06.340: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:08:06.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 1 14:08:06.340: INFO: update-demo-nautilus-nqvnr is verified up and running Jun 1 14:08:06.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmcb9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:06.443: INFO: stderr: "" Jun 1 14:08:06.443: INFO: stdout: "true" Jun 1 14:08:06.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmcb9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:06.539: INFO: stderr: "" Jun 1 14:08:06.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:08:06.539: INFO: validating pod update-demo-nautilus-zmcb9 Jun 1 14:08:06.570: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:08:06.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 1 14:08:06.570: INFO: update-demo-nautilus-zmcb9 is verified up and running STEP: scaling down the replication controller Jun 1 14:08:06.573: INFO: scanned /root for discovery docs: Jun 1 14:08:06.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4170' Jun 1 14:08:07.698: INFO: stderr: "" Jun 1 14:08:07.698: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 1 14:08:07.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4170' Jun 1 14:08:07.816: INFO: stderr: "" Jun 1 14:08:07.816: INFO: stdout: "update-demo-nautilus-nqvnr update-demo-nautilus-zmcb9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 1 14:08:12.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4170' Jun 1 14:08:12.905: INFO: stderr: "" Jun 1 14:08:12.905: INFO: stdout: "update-demo-nautilus-nqvnr " Jun 1 14:08:12.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:13.000: INFO: stderr: "" Jun 1 14:08:13.000: INFO: stdout: "true" Jun 1 14:08:13.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:13.087: INFO: stderr: "" Jun 1 14:08:13.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:08:13.087: INFO: validating pod update-demo-nautilus-nqvnr Jun 1 14:08:13.090: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:08:13.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 1 14:08:13.090: INFO: update-demo-nautilus-nqvnr is verified up and running STEP: scaling up the replication controller Jun 1 14:08:13.092: INFO: scanned /root for discovery docs: Jun 1 14:08:13.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4170' Jun 1 14:08:14.230: INFO: stderr: "" Jun 1 14:08:14.230: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 1 14:08:14.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4170' Jun 1 14:08:14.319: INFO: stderr: "" Jun 1 14:08:14.319: INFO: stdout: "update-demo-nautilus-hr2ms update-demo-nautilus-nqvnr " Jun 1 14:08:14.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hr2ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:14.406: INFO: stderr: "" Jun 1 14:08:14.406: INFO: stdout: "" Jun 1 14:08:14.406: INFO: update-demo-nautilus-hr2ms is created but not running Jun 1 14:08:19.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4170' Jun 1 14:08:19.504: INFO: stderr: "" Jun 1 14:08:19.504: INFO: stdout: "update-demo-nautilus-hr2ms update-demo-nautilus-nqvnr " Jun 1 14:08:19.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hr2ms -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:19.589: INFO: stderr: "" Jun 1 14:08:19.589: INFO: stdout: "true" Jun 1 14:08:19.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hr2ms -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:19.699: INFO: stderr: "" Jun 1 14:08:19.699: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:08:19.699: INFO: validating pod update-demo-nautilus-hr2ms Jun 1 14:08:19.703: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:08:19.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 1 14:08:19.703: INFO: update-demo-nautilus-hr2ms is verified up and running Jun 1 14:08:19.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:19.793: INFO: stderr: "" Jun 1 14:08:19.793: INFO: stdout: "true" Jun 1 14:08:19.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqvnr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4170' Jun 1 14:08:19.877: INFO: stderr: "" Jun 1 14:08:19.877: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:08:19.877: INFO: validating pod update-demo-nautilus-nqvnr Jun 1 14:08:19.880: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:08:19.880: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 1 14:08:19.880: INFO: update-demo-nautilus-nqvnr is verified up and running STEP: using delete to clean up resources Jun 1 14:08:19.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4170' Jun 1 14:08:19.976: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 1 14:08:19.976: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 1 14:08:19.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4170' Jun 1 14:08:20.078: INFO: stderr: "No resources found.\n" Jun 1 14:08:20.078: INFO: stdout: "" Jun 1 14:08:20.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4170 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 1 14:08:20.176: INFO: stderr: "" Jun 1 14:08:20.176: INFO: stdout: "update-demo-nautilus-hr2ms\nupdate-demo-nautilus-nqvnr\n" Jun 1 14:08:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4170' Jun 1 14:08:20.783: INFO: stderr: "No resources found.\n" Jun 1 14:08:20.783: INFO: stdout: "" Jun 1 14:08:20.783: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4170 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 1 14:08:20.882: INFO: stderr: "" Jun 1 14:08:20.882: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:08:20.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4170" for this suite. Jun 1 14:08:43.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:08:43.150: INFO: namespace kubectl-4170 deletion completed in 22.265175677s • [SLOW TEST:42.778 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:08:43.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-268b4b3f-8194-4140-97fd-80be4138001a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:08:49.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1609" for this suite. Jun 1 14:09:11.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:09:11.351: INFO: namespace configmap-1609 deletion completed in 22.091840295s • [SLOW TEST:28.200 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:09:11.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 1 14:09:11.388: INFO: 
Waiting up to 5m0s for pod "pod-e66e8421-3012-4d61-83a2-3ff9afee5180" in namespace "emptydir-5281" to be "success or failure" Jun 1 14:09:11.407: INFO: Pod "pod-e66e8421-3012-4d61-83a2-3ff9afee5180": Phase="Pending", Reason="", readiness=false. Elapsed: 19.352434ms Jun 1 14:09:13.411: INFO: Pod "pod-e66e8421-3012-4d61-83a2-3ff9afee5180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023447078s Jun 1 14:09:15.416: INFO: Pod "pod-e66e8421-3012-4d61-83a2-3ff9afee5180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027871203s STEP: Saw pod success Jun 1 14:09:15.416: INFO: Pod "pod-e66e8421-3012-4d61-83a2-3ff9afee5180" satisfied condition "success or failure" Jun 1 14:09:15.419: INFO: Trying to get logs from node iruya-worker pod pod-e66e8421-3012-4d61-83a2-3ff9afee5180 container test-container: STEP: delete the pod Jun 1 14:09:15.435: INFO: Waiting for pod pod-e66e8421-3012-4d61-83a2-3ff9afee5180 to disappear Jun 1 14:09:15.460: INFO: Pod pod-e66e8421-3012-4d61-83a2-3ff9afee5180 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:09:15.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5281" for this suite. 
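The repeated `Phase="Pending" ... Elapsed: ...` lines above come from a poll loop: the framework re-reads the pod's phase on an interval until it reaches a terminal value or the timeout expires. A minimal sketch of that loop, with a stub in place of the real API call:

```python
# Sketch of the "Waiting up to 5m0s for pod ... to be 'success or failure'"
# loop. get_phase is a stub standing in for a pod Get() against the API server.
import itertools
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    deadline = time.monotonic() + timeout_s
    phase = get_phase()
    while time.monotonic() < deadline:
        if phase in ("Succeeded", "Failed"):
            return phase  # the "success or failure" condition is satisfied
        time.sleep(interval_s)
        phase = get_phase()
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Stub that reports Pending twice, then Succeeded, as in the log above.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases), timeout_s=30, interval_s=0)
print(result)
```

Each iteration corresponds to one `Elapsed: ...` line in the log; "Saw pod success" is logged once the terminal `Succeeded` phase is observed.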
Jun 1 14:09:21.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:09:21.571: INFO: namespace emptydir-5281 deletion completed in 6.107743867s
• [SLOW TEST:10.220 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:09:21.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c9ec2e6c-f071-416f-bbe1-fce33b0deb85
STEP: Creating a pod to test consume configMaps
Jun 1 14:09:21.664: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397" in namespace "projected-9597" to be "success or failure"
Jun 1 14:09:21.667: INFO: Pod "pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423069ms
Jun 1 14:09:23.719: INFO: Pod "pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054787413s
Jun 1 14:09:25.723: INFO: Pod "pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058690161s
STEP: Saw pod success
Jun 1 14:09:25.723: INFO: Pod "pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397" satisfied condition "success or failure"
Jun 1 14:09:25.726: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397 container projected-configmap-volume-test:
STEP: delete the pod
Jun 1 14:09:25.759: INFO: Waiting for pod pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397 to disappear
Jun 1 14:09:25.771: INFO: Pod pod-projected-configmaps-3a097974-237d-4d90-b692-074d08498397 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:09:25.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9597" for this suite.
Jun 1 14:09:31.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:09:31.860: INFO: namespace projected-9597 deletion completed in 6.086105907s
• [SLOW TEST:10.288 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:09:31.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 1 14:09:35.990: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:09:36.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2066" for this suite.
Jun 1 14:09:42.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:09:42.151: INFO: namespace container-runtime-2066 deletion completed in 6.08492909s
• [SLOW TEST:10.290 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:09:42.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-30650ebf-5ace-4d8d-9bc7-d742b1ab3949
STEP: Creating configMap with name cm-test-opt-upd-c0d43314-0494-422f-825d-a6a1533a9293
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-30650ebf-5ace-4d8d-9bc7-d742b1ab3949
STEP: Updating configmap cm-test-opt-upd-c0d43314-0494-422f-825d-a6a1533a9293
STEP: Creating configMap with name cm-test-opt-create-e81e9dbc-d82a-4204-82c6-e31647828f39
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:09:52.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-589" for this suite.
Jun 1 14:10:14.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:10:14.441: INFO: namespace projected-589 deletion completed in 22.086160805s
• [SLOW TEST:32.289 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:10:14.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:10:20.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1221" for this suite.
Jun 1 14:10:26.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:10:26.887: INFO: namespace namespaces-1221 deletion completed in 6.095306926s
STEP: Destroying namespace "nsdeletetest-4940" for this suite.
Jun 1 14:10:26.890: INFO: Namespace nsdeletetest-4940 was already deleted
STEP: Destroying namespace "nsdeletetest-150" for this suite.
Jun 1 14:10:32.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:10:33.027: INFO: namespace nsdeletetest-150 deletion completed in 6.137351618s
• [SLOW TEST:18.586 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:10:33.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-vllz
STEP: Creating a pod to test atomic-volume-subpath
Jun 1 14:10:33.144: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vllz" in namespace "subpath-9433" to be "success or failure"
Jun 1 14:10:33.165: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Pending", Reason="", readiness=false. Elapsed: 21.106935ms
Jun 1 14:10:35.170: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025796307s
Jun 1 14:10:37.174: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 4.029856334s
Jun 1 14:10:39.177: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 6.033272837s
Jun 1 14:10:41.182: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 8.037897993s
Jun 1 14:10:43.186: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 10.042272712s
Jun 1 14:10:45.191: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 12.046923864s
Jun 1 14:10:47.196: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 14.051632104s
Jun 1 14:10:49.199: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 16.054660754s
Jun 1 14:10:51.202: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 18.05847843s
Jun 1 14:10:53.212: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 20.068231626s
Jun 1 14:10:55.218: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Running", Reason="", readiness=true. Elapsed: 22.073723784s
Jun 1 14:10:57.222: INFO: Pod "pod-subpath-test-downwardapi-vllz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.077978418s
STEP: Saw pod success
Jun 1 14:10:57.222: INFO: Pod "pod-subpath-test-downwardapi-vllz" satisfied condition "success or failure"
Jun 1 14:10:57.224: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-vllz container test-container-subpath-downwardapi-vllz:
STEP: delete the pod
Jun 1 14:10:57.281: INFO: Waiting for pod pod-subpath-test-downwardapi-vllz to disappear
Jun 1 14:10:57.306: INFO: Pod pod-subpath-test-downwardapi-vllz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-vllz
Jun 1 14:10:57.306: INFO: Deleting pod "pod-subpath-test-downwardapi-vllz" in namespace "subpath-9433"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:10:57.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9433" for this suite.
Jun 1 14:11:03.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:11:03.422: INFO: namespace subpath-9433 deletion completed in 6.108770155s
• [SLOW TEST:30.395 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:11:03.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 14:11:03.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246" in namespace "downward-api-2994" to be "success or failure"
Jun 1 14:11:03.480: INFO: Pod "downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125128ms
Jun 1 14:11:05.511: INFO: Pod "downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034882281s
Jun 1 14:11:07.515: INFO: Pod "downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03878608s
STEP: Saw pod success
Jun 1 14:11:07.515: INFO: Pod "downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246" satisfied condition "success or failure"
Jun 1 14:11:07.518: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246 container client-container:
STEP: delete the pod
Jun 1 14:11:07.560: INFO: Waiting for pod downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246 to disappear
Jun 1 14:11:07.566: INFO: Pod downwardapi-volume-a5e014a7-374a-4c6d-8bd1-ca7bee0fe246 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:11:07.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2994" for this suite.
Jun 1 14:11:13.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:11:13.657: INFO: namespace downward-api-2994 deletion completed in 6.087878618s
• [SLOW TEST:10.234 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:11:13.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jun 1 14:11:13.762: INFO: Waiting up to 5m0s for pod "var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e" in namespace "var-expansion-2968" to be "success or failure"
Jun 1 14:11:13.765: INFO: Pod "var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805663ms
Jun 1 14:11:15.770: INFO: Pod "var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007825361s
Jun 1 14:11:17.774: INFO: Pod "var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012020793s
STEP: Saw pod success
Jun 1 14:11:17.774: INFO: Pod "var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e" satisfied condition "success or failure"
Jun 1 14:11:17.777: INFO: Trying to get logs from node iruya-worker pod var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e container dapi-container:
STEP: delete the pod
Jun 1 14:11:17.980: INFO: Waiting for pod var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e to disappear
Jun 1 14:11:17.993: INFO: Pod var-expansion-84914e44-60b8-4f9e-9707-d75a6500a29e no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:11:17.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2968" for this suite.
Jun 1 14:11:24.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:11:24.081: INFO: namespace var-expansion-2968 deletion completed in 6.084309692s
• [SLOW TEST:10.424 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:11:24.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6c4a7c66-514f-4fd1-b16b-9a76f625c0ba
STEP: Creating a pod to test consume secrets
Jun 1 14:11:24.169: INFO: Waiting up to 5m0s for pod "pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b" in namespace "secrets-7003" to be "success or failure"
Jun 1 14:11:24.173: INFO: Pod "pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601802ms
Jun 1 14:11:26.177: INFO: Pod "pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008100409s
Jun 1 14:11:28.182: INFO: Pod "pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012628963s
STEP: Saw pod success
Jun 1 14:11:28.182: INFO: Pod "pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b" satisfied condition "success or failure"
Jun 1 14:11:28.185: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b container secret-volume-test:
STEP: delete the pod
Jun 1 14:11:28.224: INFO: Waiting for pod pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b to disappear
Jun 1 14:11:28.239: INFO: Pod pod-secrets-c0424242-f4f5-4bdf-9dd2-9c32ce2aeb2b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:11:28.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7003" for this suite.
Jun 1 14:11:34.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:11:34.333: INFO: namespace secrets-7003 deletion completed in 6.090264622s
• [SLOW TEST:10.252 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:11:34.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 14:11:34.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79" in namespace "downward-api-5489" to be "success or failure"
Jun 1 14:11:34.469: INFO: Pod "downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834298ms
Jun 1 14:11:36.473: INFO: Pod "downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012837564s
Jun 1 14:11:38.477: INFO: Pod "downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017324423s
STEP: Saw pod success
Jun 1 14:11:38.478: INFO: Pod "downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79" satisfied condition "success or failure"
Jun 1 14:11:38.481: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79 container client-container:
STEP: delete the pod
Jun 1 14:11:38.521: INFO: Waiting for pod downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79 to disappear
Jun 1 14:11:38.541: INFO: Pod downwardapi-volume-dfaaac96-04a4-47ad-b03c-06ab03818d79 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:11:38.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5489" for this suite.
Jun 1 14:11:44.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:11:44.636: INFO: namespace downward-api-5489 deletion completed in 6.0912039s
• [SLOW TEST:10.302 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:11:44.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 in namespace container-probe-374
Jun 1 14:11:48.779: INFO: Started pod liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 in namespace container-probe-374
STEP: checking the pod's current state and verifying that restartCount is present
Jun 1 14:11:48.783: INFO: Initial restart count of pod liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 is 0
Jun 1 14:12:08.832: INFO: Restart count of pod container-probe-374/liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 is now 1 (20.049780695s elapsed)
Jun 1 14:12:28.875: INFO: Restart count of pod container-probe-374/liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 is now 2 (40.09286858s elapsed)
Jun 1 14:12:48.919: INFO: Restart count of pod container-probe-374/liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 is now 3 (1m0.136240309s elapsed)
Jun 1 14:13:08.961: INFO: Restart count of pod container-probe-374/liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 is now 4 (1m20.178562082s elapsed)
Jun 1 14:14:13.103: INFO: Restart count of pod container-probe-374/liveness-cf2bf5d1-1535-4db8-b25a-6c5c1e4f7eb5 is now 5 (2m24.320075875s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:14:13.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-374" for this suite.
Jun 1 14:14:19.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:14:19.255: INFO: namespace container-probe-374 deletion completed in 6.087823482s
• [SLOW TEST:154.619 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:14:19.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 1 14:14:27.377: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:27.380: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:29.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:29.384: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:31.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:31.385: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:33.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:33.385: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:35.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:35.385: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:37.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:37.383: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:39.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:39.385: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:41.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:41.384: INFO: Pod pod-with-prestop-http-hook still exists
Jun 1 14:14:43.381: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 1 14:14:43.385: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:14:43.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9690" for this suite. Jun 1 14:15:05.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:15:05.490: INFO: namespace container-lifecycle-hook-9690 deletion completed in 22.092004396s • [SLOW TEST:46.235 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:15:05.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jun 1 14:15:06.084: INFO: created pod pod-service-account-defaultsa Jun 1 14:15:06.084: INFO: pod pod-service-account-defaultsa service account 
token volume mount: true Jun 1 14:15:06.090: INFO: created pod pod-service-account-mountsa Jun 1 14:15:06.090: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 1 14:15:06.108: INFO: created pod pod-service-account-nomountsa Jun 1 14:15:06.108: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 1 14:15:06.147: INFO: created pod pod-service-account-defaultsa-mountspec Jun 1 14:15:06.147: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 1 14:15:06.212: INFO: created pod pod-service-account-mountsa-mountspec Jun 1 14:15:06.212: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 1 14:15:06.235: INFO: created pod pod-service-account-nomountsa-mountspec Jun 1 14:15:06.235: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 1 14:15:06.279: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 1 14:15:06.279: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 1 14:15:06.335: INFO: created pod pod-service-account-mountsa-nomountspec Jun 1 14:15:06.335: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 1 14:15:06.357: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 1 14:15:06.358: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:15:06.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9840" for this suite. 
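The mount/nomount matrix in the log comes from combining a ServiceAccount-level and a pod-level `automountServiceAccountToken` setting; when both are set, the pod-level field wins. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # SA-level default: do not mount the token
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod-level override: token IS mounted
  containers:
  - name: token-test
    image: nginx
```

This matches the log line `pod-service-account-nomountsa-mountspec service account token volume mount: true`: the pod spec overrides the service account's opt-out.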
Jun 1 14:15:36.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:15:36.565: INFO: namespace svcaccounts-9840 deletion completed in 30.159317187s • [SLOW TEST:31.075 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:15:36.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:15:40.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1605" for this suite. 
Jun 1 14:15:46.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:15:46.755: INFO: namespace kubelet-test-1605 deletion completed in 6.087397653s • [SLOW TEST:10.190 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:15:46.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 1 14:15:46.851: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4982,SelfLink:/api/v1/namespaces/watch-4982/configmaps/e2e-watch-test-label-changed,UID:e683ab83-5c45-45d8-806f-5cfcd2dc9ead,ResourceVersion:14095594,Generation:0,CreationTimestamp:2020-06-01 14:15:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 1 14:15:46.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4982,SelfLink:/api/v1/namespaces/watch-4982/configmaps/e2e-watch-test-label-changed,UID:e683ab83-5c45-45d8-806f-5cfcd2dc9ead,ResourceVersion:14095595,Generation:0,CreationTimestamp:2020-06-01 14:15:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 1 14:15:46.851: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4982,SelfLink:/api/v1/namespaces/watch-4982/configmaps/e2e-watch-test-label-changed,UID:e683ab83-5c45-45d8-806f-5cfcd2dc9ead,ResourceVersion:14095596,Generation:0,CreationTimestamp:2020-06-01 14:15:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 1 14:15:56.909: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4982,SelfLink:/api/v1/namespaces/watch-4982/configmaps/e2e-watch-test-label-changed,UID:e683ab83-5c45-45d8-806f-5cfcd2dc9ead,ResourceVersion:14095617,Generation:0,CreationTimestamp:2020-06-01 14:15:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 1 14:15:56.910: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4982,SelfLink:/api/v1/namespaces/watch-4982/configmaps/e2e-watch-test-label-changed,UID:e683ab83-5c45-45d8-806f-5cfcd2dc9ead,ResourceVersion:14095618,Generation:0,CreationTimestamp:2020-06-01 14:15:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 1 14:15:56.910: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4982,SelfLink:/api/v1/namespaces/watch-4982/configmaps/e2e-watch-test-label-changed,UID:e683ab83-5c45-45d8-806f-5cfcd2dc9ead,ResourceVersion:14095619,Generation:0,CreationTimestamp:2020-06-01 14:15:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:15:56.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4982" for this suite. Jun 1 14:16:02.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:16:03.015: INFO: namespace watch-4982 deletion completed in 6.100180534s • [SLOW TEST:16.259 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:16:03.015: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-0df10c41-3732-4dba-8415-d950c60975eb STEP: Creating a pod to test consume configMaps Jun 1 14:16:03.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba" in namespace "configmap-3461" to be "success or failure" Jun 1 14:16:03.136: INFO: Pod "pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241956ms Jun 1 14:16:05.140: INFO: Pod "pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008411779s Jun 1 14:16:07.144: INFO: Pod "pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012913648s STEP: Saw pod success Jun 1 14:16:07.145: INFO: Pod "pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba" satisfied condition "success or failure" Jun 1 14:16:07.148: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba container configmap-volume-test: STEP: delete the pod Jun 1 14:16:07.264: INFO: Waiting for pod pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba to disappear Jun 1 14:16:07.315: INFO: Pod pod-configmaps-5bb4f25f-d33d-40f9-9bae-f1747c61ddba no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:16:07.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3461" for this suite. 
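The ConfigMap volume test above consumes a key through an `items` mapping while the container runs as a non-root UID (the `[LinuxOnly]` variant). A hedged sketch of the kind of pod it creates; the image, key names, and paths are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative; the suite generates UUID names
spec:
  securityContext:
    runAsUser: 1000                   # non-root, per the [LinuxOnly] variant
  restartPolicy: Never                # pod runs once, then "success or failure" is checked
  containers:
  - name: configmap-volume-test
    image: busybox                    # assumed; the suite uses its own mount-test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-0df10c41-3732-4dba-8415-d950c60975eb
      items:
      - key: data-2                   # assumed key
        path: path/to/data-2          # "with mappings": key remapped to a nested path
```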
Jun 1 14:16:13.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:16:13.438: INFO: namespace configmap-3461 deletion completed in 6.118813919s • [SLOW TEST:10.423 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:16:13.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 1 14:16:13.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2216' Jun 1 14:16:16.280: INFO: stderr: "" Jun 1 14:16:16.280: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jun 1 14:16:17.285: INFO: Selector matched 1 pods for map[app:redis] Jun 1 14:16:17.285: INFO: Found 0 / 1 Jun 1 14:16:18.285: INFO: Selector matched 1 pods for map[app:redis] Jun 1 14:16:18.285: INFO: Found 0 / 1 Jun 1 14:16:19.306: INFO: Selector matched 1 pods for map[app:redis] Jun 1 14:16:19.306: INFO: Found 0 / 1 Jun 1 14:16:20.299: INFO: Selector matched 1 pods for map[app:redis] Jun 1 14:16:20.299: INFO: Found 1 / 1 Jun 1 14:16:20.299: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 1 14:16:20.304: INFO: Selector matched 1 pods for map[app:redis] Jun 1 14:16:20.304: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 1 14:16:20.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-w5x68 --namespace=kubectl-2216 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 1 14:16:20.430: INFO: stderr: "" Jun 1 14:16:20.430: INFO: stdout: "pod/redis-master-w5x68 patched\n" STEP: checking annotations Jun 1 14:16:20.433: INFO: Selector matched 1 pods for map[app:redis] Jun 1 14:16:20.433: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:16:20.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2216" for this suite. 
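The `kubectl patch` run above applies a strategic-merge patch to the pod's metadata; the inline JSON from the log is equivalent to this YAML fragment (the `x: "y"` annotation is taken verbatim from the log):

```yaml
# Strategic-merge patch equivalent to -p '{"metadata":{"annotations":{"x":"y"}}}'
metadata:
  annotations:
    x: "y"
```

`kubectl patch` accepts YAML as well as JSON in `-p`, so passing this fragment as the patch string should have the same effect as the command in the log.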
Jun 1 14:16:42.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:16:42.563: INFO: namespace kubectl-2216 deletion completed in 22.126997382s • [SLOW TEST:29.124 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:16:42.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 14:16:42.647: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:16:43.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9771" for this suite. 
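The CRD test above only needs to create and delete a definition object. Against the `v1.15` API server in this run, CRDs use the `apiextensions.k8s.io/v1beta1` API; a minimal sketch using the canonical example names from the Kubernetes documentation (not the fixture names the e2e suite generates):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches the v1.15 server in this run
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Creating this registers a new `crontabs` resource under `/apis/stable.example.com/v1`; deleting the CRD removes the resource and any instances of it.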
Jun 1 14:16:49.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:16:49.820: INFO: namespace custom-resource-definition-9771 deletion completed in 6.092129268s • [SLOW TEST:7.257 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:16:49.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jun 1 14:16:49.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-344' Jun 1 14:16:50.291: INFO: 
stderr: "" Jun 1 14:16:50.291: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 1 14:16:50.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-344' Jun 1 14:16:50.420: INFO: stderr: "" Jun 1 14:16:50.420: INFO: stdout: "update-demo-nautilus-lrsvd update-demo-nautilus-vzjg2 " Jun 1 14:16:50.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrsvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:16:50.527: INFO: stderr: "" Jun 1 14:16:50.527: INFO: stdout: "" Jun 1 14:16:50.527: INFO: update-demo-nautilus-lrsvd is created but not running Jun 1 14:16:55.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-344' Jun 1 14:16:55.629: INFO: stderr: "" Jun 1 14:16:55.629: INFO: stdout: "update-demo-nautilus-lrsvd update-demo-nautilus-vzjg2 " Jun 1 14:16:55.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrsvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:16:55.719: INFO: stderr: "" Jun 1 14:16:55.719: INFO: stdout: "true" Jun 1 14:16:55.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrsvd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:16:55.812: INFO: stderr: "" Jun 1 14:16:55.812: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:16:55.812: INFO: validating pod update-demo-nautilus-lrsvd Jun 1 14:16:55.816: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:16:55.816: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 1 14:16:55.816: INFO: update-demo-nautilus-lrsvd is verified up and running Jun 1 14:16:55.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:16:55.937: INFO: stderr: "" Jun 1 14:16:55.937: INFO: stdout: "true" Jun 1 14:16:55.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjg2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:16:56.034: INFO: stderr: "" Jun 1 14:16:56.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 1 14:16:56.034: INFO: validating pod update-demo-nautilus-vzjg2 Jun 1 14:16:56.039: INFO: got data: { "image": "nautilus.jpg" } Jun 1 14:16:56.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 1 14:16:56.039: INFO: update-demo-nautilus-vzjg2 is verified up and running STEP: rolling-update to new replication controller Jun 1 14:16:56.042: INFO: scanned /root for discovery docs: Jun 1 14:16:56.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-344' Jun 1 14:17:18.641: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 1 14:17:18.641: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 1 14:17:18.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-344' Jun 1 14:17:18.736: INFO: stderr: "" Jun 1 14:17:18.736: INFO: stdout: "update-demo-kitten-mhpvl update-demo-kitten-rcgn7 " Jun 1 14:17:18.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mhpvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:17:18.825: INFO: stderr: "" Jun 1 14:17:18.825: INFO: stdout: "true" Jun 1 14:17:18.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mhpvl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:17:18.914: INFO: stderr: "" Jun 1 14:17:18.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 1 14:17:18.914: INFO: validating pod update-demo-kitten-mhpvl Jun 1 14:17:18.926: INFO: got data: { "image": "kitten.jpg" } Jun 1 14:17:18.926: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 1 14:17:18.926: INFO: update-demo-kitten-mhpvl is verified up and running Jun 1 14:17:18.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rcgn7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:17:19.015: INFO: stderr: "" Jun 1 14:17:19.015: INFO: stdout: "true" Jun 1 14:17:19.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rcgn7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-344' Jun 1 14:17:19.103: INFO: stderr: "" Jun 1 14:17:19.103: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 1 14:17:19.103: INFO: validating pod update-demo-kitten-rcgn7 Jun 1 14:17:19.116: INFO: got data: { "image": "kitten.jpg" } Jun 1 14:17:19.116: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 1 14:17:19.116: INFO: update-demo-kitten-rcgn7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:17:19.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-344" for this suite. 
Jun 1 14:17:43.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:17:43.230: INFO: namespace kubectl-344 deletion completed in 24.111020919s • [SLOW TEST:53.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:17:43.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jun 1 14:17:43.371: INFO: Waiting up to 5m0s for pod "pod-d23907a3-bf94-4daa-8747-3cc63d2a2001" in namespace "emptydir-9786" to be "success or failure" Jun 1 14:17:43.399: INFO: Pod "pod-d23907a3-bf94-4daa-8747-3cc63d2a2001": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.797798ms Jun 1 14:17:45.403: INFO: Pod "pod-d23907a3-bf94-4daa-8747-3cc63d2a2001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032047669s Jun 1 14:17:47.407: INFO: Pod "pod-d23907a3-bf94-4daa-8747-3cc63d2a2001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035321094s STEP: Saw pod success Jun 1 14:17:47.407: INFO: Pod "pod-d23907a3-bf94-4daa-8747-3cc63d2a2001" satisfied condition "success or failure" Jun 1 14:17:47.409: INFO: Trying to get logs from node iruya-worker pod pod-d23907a3-bf94-4daa-8747-3cc63d2a2001 container test-container: STEP: delete the pod Jun 1 14:17:47.593: INFO: Waiting for pod pod-d23907a3-bf94-4daa-8747-3cc63d2a2001 to disappear Jun 1 14:17:47.645: INFO: Pod pod-d23907a3-bf94-4daa-8747-3cc63d2a2001 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:17:47.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9786" for this suite. 
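The emptyDir test above mounts a volume on the default medium (node disk, since no `medium` is set) and has the container report the mount point's file mode; the kubelet is expected to create the directory world-writable (mode `0777`). A hedged sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-check       # illustrative; the suite generates UUID names
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # assumed; the suite uses its own mount-test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the volume dir's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # no medium field => default (disk-backed)
```

The `tmpfs` variant of this test sets `emptyDir: {medium: Memory}` instead; the default-medium case shown here is what this log section covers.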
Jun 1 14:17:53.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:17:53.773: INFO: namespace emptydir-9786 deletion completed in 6.125416114s

• [SLOW TEST:10.542 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:17:53.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 1 14:17:53.827: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:17:59.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4929" for this suite.
Jun 1 14:18:05.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:18:05.577: INFO: namespace init-container-4929 deletion completed in 6.110831167s

• [SLOW TEST:11.803 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:18:05.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 14:18:05.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7" in namespace "projected-1370" to be "success or failure"
Jun 1 14:18:05.690: INFO: Pod "downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.736436ms
Jun 1 14:18:07.693: INFO: Pod "downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038831782s
Jun 1 14:18:09.698: INFO: Pod "downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043626335s
STEP: Saw pod success
Jun 1 14:18:09.698: INFO: Pod "downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7" satisfied condition "success or failure"
Jun 1 14:18:09.701: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7 container client-container:
STEP: delete the pod
Jun 1 14:18:09.745: INFO: Waiting for pod downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7 to disappear
Jun 1 14:18:09.767: INFO: Pod downwardapi-volume-ca000ea7-35ca-40a4-bbcb-bbc1a95200c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:18:09.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1370" for this suite.
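Nearly every spec in this run shows the same wait pattern: poll the pod every couple of seconds, logging the phase and elapsed time, until it reaches "success or failure" or a 5m0s budget expires. A minimal sketch of that loop (names and the toy pod are illustrative, not the framework's actual helpers):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0):
    """Poll `check` until it is truthy or `timeout` elapses, mirroring the
    'Waiting up to 5m0s for pod ... to be "success or failure"' loop."""
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError("condition not met after %.1fs" % elapsed)
        time.sleep(interval)

# Toy usage: a fake pod that reaches Succeeded on its third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
last = {"phase": None}

def pod_done():
    last["phase"] = next(phases)
    return last["phase"] == "Succeeded"

elapsed = wait_for_condition(pod_done, timeout=5.0, interval=0.01)
```

The "Elapsed: 27.797798ms / 2.03s / 4.03s" lines in the log are exactly this loop's per-poll reporting: the first check happens almost immediately, then roughly every two seconds.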
Jun 1 14:18:15.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:18:15.880: INFO: namespace projected-1370 deletion completed in 6.110171912s

• [SLOW TEST:10.302 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:18:15.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 1 14:18:15.950: INFO: Waiting up to 5m0s for pod "downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf" in namespace "downward-api-202" to be "success or failure"
Jun 1 14:18:15.957: INFO: Pod "downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583646ms
Jun 1 14:18:17.961: INFO: Pod "downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010371287s
Jun 1 14:18:19.965: INFO: Pod "downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015023102s
STEP: Saw pod success
Jun 1 14:18:19.965: INFO: Pod "downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf" satisfied condition "success or failure"
Jun 1 14:18:19.969: INFO: Trying to get logs from node iruya-worker2 pod downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf container dapi-container:
STEP: delete the pod
Jun 1 14:18:20.034: INFO: Waiting for pod downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf to disappear
Jun 1 14:18:20.047: INFO: Pod downward-api-753f9627-6d82-42d5-b5ac-5145bb92ffcf no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:18:20.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-202" for this suite.
Jun 1 14:18:26.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:18:26.162: INFO: namespace downward-api-202 deletion completed in 6.111586506s

• [SLOW TEST:10.282 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:18:26.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-894ccce7-076b-48ef-91e8-e29f49f416e5
STEP: Creating a pod to test consume configMaps
Jun 1 14:18:26.259: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613" in namespace "projected-6852" to be "success or failure"
Jun 1 14:18:26.263: INFO: Pod "pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835607ms
Jun 1 14:18:28.267: INFO: Pod "pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008415334s
Jun 1 14:18:30.272: INFO: Pod "pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012951649s
STEP: Saw pod success
Jun 1 14:18:30.272: INFO: Pod "pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613" satisfied condition "success or failure"
Jun 1 14:18:30.275: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613 container projected-configmap-volume-test:
STEP: delete the pod
Jun 1 14:18:30.316: INFO: Waiting for pod pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613 to disappear
Jun 1 14:18:30.323: INFO: Pod pod-projected-configmaps-0b875ddf-a7c9-4034-9ff9-85c503969613 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:18:30.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6852" for this suite.
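The projected configMap spec above mounts each ConfigMap key as a file in the volume and has the test container read the contents back. A rough local stand-in for that key-to-file projection (the key and value are invented for the sketch, not taken from the test's ConfigMap):

```python
import os
import tempfile

# Hypothetical stand-in for projecting ConfigMap keys into a volume:
# each key becomes a file whose contents are that key's value.
config_map = {"data-1": "value-1"}  # key/value invented for the sketch
volume = tempfile.mkdtemp()
for key, value in config_map.items():
    with open(os.path.join(volume, key), "w") as f:
        f.write(value)

# The test container's job is essentially: read the file back and compare.
with open(os.path.join(volume, "data-1")) as f:
    content = f.read()
```

In the real test the kubelet performs the projection and the comparison happens against the container's stdout, but the contract being verified is the same: file name equals key, file contents equal value.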
Jun 1 14:18:36.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:18:36.440: INFO: namespace projected-6852 deletion completed in 6.113792961s

• [SLOW TEST:10.278 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:18:36.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jun 1 14:18:36.502: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 1 14:18:36.519: INFO: Waiting for terminating namespaces to be deleted...
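The SchedulerPredicates spec starting here exercises an additive CPU fit check: sum the CPU requests already bound to each node, then reject any pod whose request would push a node past its allocatable CPU. The log that follows shows filler pods consuming most of each node's CPU and an "additional-pod" failing with "Insufficient cpu". A simplified sketch of the predicate, with invented capacities (the real scheduler also weighs taints, memory, and many other predicates):

```python
# Simplified additive fit check; all quantities in millicores.
# Node capacities and running requests below are invented for the sketch.
allocatable = {"iruya-worker": 2000, "iruya-worker2": 2000}
requested = {"iruya-worker": 1900, "iruya-worker2": 1900}  # after filler pods

def nodes_with_room(pod_request_m):
    """Nodes whose remaining CPU covers the request; an empty result is
    the 'Insufficient cpu' outcome seen in the FailedScheduling event."""
    return [n for n in allocatable
            if requested[n] + pod_request_m <= allocatable[n]]
```

With these made-up numbers, `nodes_with_room(100)` still returns both workers, while `nodes_with_room(500)` returns none, which is the situation the test deliberately engineers.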
Jun 1 14:18:36.523: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Jun 1 14:18:36.528: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 1 14:18:36.528: INFO: Container kube-proxy ready: true, restart count 0
Jun 1 14:18:36.528: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 1 14:18:36.528: INFO: Container kindnet-cni ready: true, restart count 2
Jun 1 14:18:36.528: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Jun 1 14:18:36.533: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Jun 1 14:18:36.533: INFO: Container kindnet-cni ready: true, restart count 2
Jun 1 14:18:36.533: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Jun 1 14:18:36.533: INFO: Container kube-proxy ready: true, restart count 0
Jun 1 14:18:36.533: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Jun 1 14:18:36.533: INFO: Container coredns ready: true, restart count 0
Jun 1 14:18:36.533: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Jun 1 14:18:36.533: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Jun 1 14:18:36.638: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Jun 1 14:18:36.638: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Jun 1 14:18:36.638: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Jun 1 14:18:36.638: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Jun 1 14:18:36.638: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Jun 1 14:18:36.638: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408.16147146a9da1690], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4068/filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408.161471472e176002], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408.161471477d9c9ba4], Reason = [Created], Message = [Created container filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408]
STEP: Considering event: Type = [Normal], Name = [filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408.161471478df0ab68], Reason = [Started], Message = [Started container filler-pod-0b9eeb02-1264-4bdf-a00d-c98f72447408]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d.16147146a84a934d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4068/filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d.16147146f8b0d654], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d.1614714751960b79], Reason = [Created], Message = [Created container filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d.16147147613b0a8f], Reason = [Started], Message = [Started container filler-pod-cc0fb495-0b2e-41ce-b99c-2f4f2417175d]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16147147993c4919], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:18:41.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4068" for this suite.
Jun 1 14:18:47.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:18:48.048: INFO: namespace sched-pred-4068 deletion completed in 6.191210193s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:11.607 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:18:48.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 14:18:48.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746" in namespace "projected-2175" to be "success or failure"
Jun 1 14:18:48.341: INFO: Pod "downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746": Phase="Pending", Reason="", readiness=false. Elapsed: 6.798892ms
Jun 1 14:18:50.345: INFO: Pod "downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010796332s
Jun 1 14:18:52.350: INFO: Pod "downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015632211s
STEP: Saw pod success
Jun 1 14:18:52.350: INFO: Pod "downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746" satisfied condition "success or failure"
Jun 1 14:18:52.353: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746 container client-container:
STEP: delete the pod
Jun 1 14:18:52.373: INFO: Waiting for pod downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746 to disappear
Jun 1 14:18:52.377: INFO: Pod downwardapi-volume-b5169c74-f392-4a0d-a54b-dc0c725c2746 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:18:52.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2175" for this suite.
Jun 1 14:18:58.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:18:58.471: INFO: namespace projected-2175 deletion completed in 6.090680294s

• [SLOW TEST:10.422 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:18:58.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 1 14:18:58.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7" in namespace "downward-api-3166" to be "success or failure"
Jun 1 14:18:58.545: INFO: Pod "downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.982519ms
Jun 1 14:19:00.614: INFO: Pod "downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077656773s
Jun 1 14:19:02.704: INFO: Pod "downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167375947s
STEP: Saw pod success
Jun 1 14:19:02.704: INFO: Pod "downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7" satisfied condition "success or failure"
Jun 1 14:19:02.707: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7 container client-container:
STEP: delete the pod
Jun 1 14:19:02.727: INFO: Waiting for pod downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7 to disappear
Jun 1 14:19:02.731: INFO: Pod downwardapi-volume-fb685413-4234-47ae-91d0-090cbba4a8e7 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:19:02.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3166" for this suite.
Jun 1 14:19:08.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:19:08.888: INFO: namespace downward-api-3166 deletion completed in 6.154141001s

• [SLOW TEST:10.416 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:19:08.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jun 1 14:19:08.937: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix087200441/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:19:08.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-138" for this suite.
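The proxy spec above runs `kubectl proxy --unix-socket=/path`, which serves the API over a filesystem socket instead of a TCP port, so access is governed by file permissions rather than network reachability. A toy AF_UNIX round trip illustrating that transport (the socket path and the canned JSON reply are invented, not real kubectl or apiserver output):

```python
import os
import socket
import tempfile
import threading

# Toy AF_UNIX request/response, standing in for a client hitting the
# proxy's unix socket; everything here is illustrative.
path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
ready = threading.Event()

def serve_once():
    # Stand-in for the proxy: answer a single request, then exit.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            conn.recv(1024)                        # discard the request
            conn.sendall(b'{"kind":"APIVersions"}')

t = threading.Thread(target=serve_once)
t.start()
ready.wait()
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
    c.connect(path)
    c.sendall(b"GET /api/ HTTP/1.0\r\n\r\n")
    reply = c.recv(1024)
t.join()
```

The e2e test does the equivalent: start the proxy asynchronously on the socket, then fetch `/api/` through it and check the output.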
Jun 1 14:19:15.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:19:15.106: INFO: namespace kubectl-138 deletion completed in 6.084417977s

• [SLOW TEST:6.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:19:15.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 1 14:19:23.226: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:23.249: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:25.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:25.253: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:27.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:27.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:29.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:29.253: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:31.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:31.253: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:33.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:33.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:35.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:35.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:37.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:37.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:39.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:39.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:41.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:41.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 1 14:19:43.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 1 14:19:43.253: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:19:43.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6530" for this suite.
Jun 1 14:20:05.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:20:05.382: INFO: namespace container-lifecycle-hook-6530 deletion completed in 22.119609785s

• [SLOW TEST:50.275 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:20:05.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:20:31.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4816" for this suite.
Jun 1 14:20:37.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:20:37.751: INFO: namespace namespaces-4816 deletion completed in 6.118027497s
STEP: Destroying namespace "nsdeletetest-449" for this suite.
Jun 1 14:20:37.753: INFO: Namespace nsdeletetest-449 was already deleted
STEP: Destroying namespace "nsdeletetest-7795" for this suite.
Jun 1 14:20:43.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:20:43.876: INFO: namespace nsdeletetest-7795 deletion completed in 6.123092865s

• [SLOW TEST:38.494 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:20:43.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 1 14:20:43.939: INFO: Waiting up to 5m0s for pod "pod-9f02c360-4ba9-485d-ba3c-e03f4c801914" in namespace "emptydir-2725" to be "success or failure"
Jun 1 14:20:43.942: INFO: Pod "pod-9f02c360-4ba9-485d-ba3c-e03f4c801914": Phase="Pending", Reason="", readiness=false. Elapsed: 3.344856ms
Jun 1 14:20:45.946: INFO: Pod "pod-9f02c360-4ba9-485d-ba3c-e03f4c801914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007415104s
Jun 1 14:20:47.950: INFO: Pod "pod-9f02c360-4ba9-485d-ba3c-e03f4c801914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011123111s
STEP: Saw pod success
Jun 1 14:20:47.950: INFO: Pod "pod-9f02c360-4ba9-485d-ba3c-e03f4c801914" satisfied condition "success or failure"
Jun 1 14:20:47.953: INFO: Trying to get logs from node iruya-worker pod pod-9f02c360-4ba9-485d-ba3c-e03f4c801914 container test-container:
STEP: delete the pod
Jun 1 14:20:47.993: INFO: Waiting for pod pod-9f02c360-4ba9-485d-ba3c-e03f4c801914 to disappear
Jun 1 14:20:48.003: INFO: Pod pod-9f02c360-4ba9-485d-ba3c-e03f4c801914 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:20:48.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2725" for this suite.
Jun 1 14:20:54.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:20:54.165: INFO: namespace emptydir-2725 deletion completed in 6.160062327s • [SLOW TEST:10.288 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:20:54.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 14:20:54.290: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 1 14:20:59.295: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 1 14:20:59.295: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 1 14:20:59.320: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3590,SelfLink:/apis/apps/v1/namespaces/deployment-3590/deployments/test-cleanup-deployment,UID:0b4456e6-7c6d-44c7-adea-5442dc10d897,ResourceVersion:14096769,Generation:1,CreationTimestamp:2020-06-01 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 1 14:20:59.360: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3590,SelfLink:/apis/apps/v1/namespaces/deployment-3590/replicasets/test-cleanup-deployment-55bbcbc84c,UID:c27b95df-1235-407f-95dd-56b4d664bf6c,ResourceVersion:14096771,Generation:1,CreationTimestamp:2020-06-01 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
0b4456e6-7c6d-44c7-adea-5442dc10d897 0xc00278a9f7 0xc00278a9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 14:20:59.360: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 1 14:20:59.360: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3590,SelfLink:/apis/apps/v1/namespaces/deployment-3590/replicasets/test-cleanup-controller,UID:8401accc-ac99-42f2-b186-fa0d04b02ad0,ResourceVersion:14096770,Generation:1,CreationTimestamp:2020-06-01 14:20:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0b4456e6-7c6d-44c7-adea-5442dc10d897 0xc00278a917 0xc00278a918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 1 14:20:59.405: INFO: Pod "test-cleanup-controller-x6npk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-x6npk,GenerateName:test-cleanup-controller-,Namespace:deployment-3590,SelfLink:/api/v1/namespaces/deployment-3590/pods/test-cleanup-controller-x6npk,UID:c22e6692-5d37-4836-9baf-97b161ec3aa1,ResourceVersion:14096761,Generation:0,CreationTimestamp:2020-06-01 14:20:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 8401accc-ac99-42f2-b186-fa0d04b02ad0 0xc00278b307 0xc00278b308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6c56r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6c56r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6c56r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00278b380} {node.kubernetes.io/unreachable Exists NoExecute 0xc00278b3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:20:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:20:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:20:56 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:20:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.211,StartTime:2020-06-01 14:20:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:20:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5386606f66a0706e8ab6b0398b1f2b329fff1939b2a6c08dab7d457ed76e5bde}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:20:59.405: INFO: Pod "test-cleanup-deployment-55bbcbc84c-mxrhm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-mxrhm,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3590,SelfLink:/api/v1/namespaces/deployment-3590/pods/test-cleanup-deployment-55bbcbc84c-mxrhm,UID:57da4e41-2ae8-4653-ac57-c26a02f0a381,ResourceVersion:14096776,Generation:0,CreationTimestamp:2020-06-01 14:20:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c c27b95df-1235-407f-95dd-56b4d664bf6c 0xc00278b497 0xc00278b498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6c56r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6c56r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-6c56r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00278b510} {node.kubernetes.io/unreachable Exists NoExecute 0xc00278b530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:20:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:20:59.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3590" for this suite. 
Jun 1 14:21:05.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:21:05.575: INFO: namespace deployment-3590 deletion completed in 6.151545816s • [SLOW TEST:11.409 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:21:05.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-1b28ad8c-90d8-49b8-9a49-0c404de0f6cf in namespace container-probe-2721 Jun 1 14:21:09.643: INFO: Started pod test-webserver-1b28ad8c-90d8-49b8-9a49-0c404de0f6cf in namespace container-probe-2721 STEP: checking the pod's current state and verifying that restartCount is present Jun 1 14:21:09.646: INFO: Initial restart count of pod test-webserver-1b28ad8c-90d8-49b8-9a49-0c404de0f6cf is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:25:10.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2721" for this suite. Jun 1 14:25:16.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:25:16.515: INFO: namespace container-probe-2721 deletion completed in 6.129776397s • [SLOW TEST:250.940 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:25:16.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 1 14:25:23.690: INFO: 0 pods remaining Jun 1 14:25:23.690: INFO: 0 pods has nil DeletionTimestamp Jun 1 14:25:23.690: INFO: STEP: Gathering metrics W0601 14:25:24.513482 6 metrics_grabber.go:79] Master node is not 
registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 1 14:25:24.513: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:25:24.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8744" for this suite. 
Jun 1 14:25:30.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:25:30.927: INFO: namespace gc-8744 deletion completed in 6.409978391s • [SLOW TEST:14.411 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:25:30.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5dcwh in namespace proxy-1177 I0601 14:25:31.157923 6 runners.go:180] Created replication controller with name: proxy-service-5dcwh, namespace: proxy-1177, replica count: 1 I0601 14:25:32.208354 6 runners.go:180] proxy-service-5dcwh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0601 14:25:33.208581 6 runners.go:180] proxy-service-5dcwh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0601 14:25:34.208831 6 runners.go:180] 
proxy-service-5dcwh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0601 14:25:35.209014 6 runners.go:180] proxy-service-5dcwh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 1 14:25:35.212: INFO: setup took 4.207537528s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 1 14:25:35.220: INFO: (0) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 7.983534ms) Jun 1 14:25:35.220: INFO: (0) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 8.181498ms) Jun 1 14:25:35.220: INFO: (0) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 8.263172ms) Jun 1 14:25:35.222: INFO: (0) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 10.243449ms) Jun 1 14:25:35.222: INFO: (0) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 10.214812ms) Jun 1 14:25:35.223: INFO: (0) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 10.638656ms) Jun 1 14:25:35.223: INFO: (0) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 10.485692ms) Jun 1 14:25:35.223: INFO: (0) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 10.579479ms) Jun 1 14:25:35.223: INFO: (0) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 10.520084ms) Jun 1 14:25:35.223: INFO: (0) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 10.485388ms) Jun 1 14:25:35.225: INFO: (0) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 13.22015ms) Jun 1 14:25:35.242: INFO: (0) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 29.495123ms) Jun 1 14:25:35.242: INFO: (0) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 29.427198ms) Jun 1 14:25:35.242: INFO: (0) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... (200; 4.603968ms) Jun 1 14:25:35.246: INFO: (1) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.668614ms) Jun 1 14:25:35.246: INFO: (1) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.675361ms) Jun 1 14:25:35.247: INFO: (1) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.694793ms) Jun 1 14:25:35.247: INFO: (1) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 5.157431ms) Jun 1 14:25:35.247: INFO: (1) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 5.259685ms) Jun 1 14:25:35.248: INFO: (1) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 5.813785ms) Jun 1 14:25:35.248: INFO: (1) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 6.121518ms) Jun 1 14:25:35.248: INFO: (1) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 5.996325ms) Jun 1 14:25:35.248: INFO: (1) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 6.134395ms) Jun 1 14:25:35.249: INFO: (1) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 7.175438ms) Jun 1 14:25:35.249: INFO: (1) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 7.274847ms) Jun 1 14:25:35.250: INFO: (1) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... (200; 4.364455ms) Jun 1 14:25:35.254: INFO: (2) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.479966ms) Jun 1 14:25:35.254: INFO: (2) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.632662ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 4.575953ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 5.151812ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 5.043304ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 5.100839ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 5.246125ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 5.258105ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 5.416695ms) Jun 1 14:25:35.255: INFO: (2) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 5.319601ms) Jun 1 14:25:35.256: INFO: (2) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 5.627834ms) Jun 1 14:25:35.256: INFO: (2) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 5.668326ms) Jun 1 14:25:35.256: INFO: (2) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 5.74146ms) Jun 1 14:25:35.256: INFO: (2) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 5.934439ms) Jun 1 14:25:35.256: INFO: (2) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... (200; 3.877439ms) Jun 1 14:25:35.260: INFO: (3) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 3.949685ms) Jun 1 14:25:35.260: INFO: (3) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.222408ms) Jun 1 14:25:35.260: INFO: (3) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 4.309225ms) Jun 1 14:25:35.260: INFO: (3) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.285347ms) Jun 1 14:25:35.260: INFO: (3) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.288864ms) Jun 1 14:25:35.260: INFO: (3) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test (200; 4.644556ms) Jun 1 14:25:35.267: INFO: (4) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.618751ms) Jun 1 14:25:35.267: INFO: (4) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.691439ms) Jun 1 14:25:35.267: INFO: (4) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.758809ms) Jun 1 14:25:35.267: INFO: (4) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 4.743152ms) Jun 1 14:25:35.267: INFO: (4) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.69065ms) Jun 1 14:25:35.267: INFO: (4) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... (200; 4.753999ms) Jun 1 14:25:35.268: INFO: (4) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.983529ms) Jun 1 14:25:35.268: INFO: (4) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 5.040401ms) Jun 1 14:25:35.268: INFO: (4) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 5.058623ms) Jun 1 14:25:35.272: INFO: (5) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... 
(200; 4.169282ms) Jun 1 14:25:35.272: INFO: (5) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.285116ms) Jun 1 14:25:35.273: INFO: (5) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 5.619627ms) Jun 1 14:25:35.274: INFO: (5) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 5.682874ms) Jun 1 14:25:35.274: INFO: (5) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 5.834019ms) Jun 1 14:25:35.274: INFO: (5) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 5.853648ms) Jun 1 14:25:35.274: INFO: (5) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 5.959283ms) Jun 1 14:25:35.274: INFO: (5) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 5.901802ms) Jun 1 14:25:35.274: INFO: (5) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 5.860976ms) Jun 1 14:25:35.276: INFO: (6) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... 
(200; 1.802765ms) Jun 1 14:25:35.277: INFO: (6) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 3.124339ms) Jun 1 14:25:35.277: INFO: (6) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.133422ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.803099ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 3.778608ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.800358ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 3.956379ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.847506ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... 
(200; 3.981667ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.326242ms) Jun 1 14:25:35.278: INFO: (6) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.439664ms) Jun 1 14:25:35.279: INFO: (6) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.765469ms) Jun 1 14:25:35.279: INFO: (6) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 4.753153ms) Jun 1 14:25:35.279: INFO: (6) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.849581ms) Jun 1 14:25:35.279: INFO: (6) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.898884ms) Jun 1 14:25:35.281: INFO: (7) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 2.086377ms) Jun 1 14:25:35.282: INFO: (7) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 3.538002ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.048601ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... (200; 4.062015ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... 
(200; 4.040773ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.123305ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.187418ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.13471ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.290202ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.640293ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.7246ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.588535ms) Jun 1 14:25:35.283: INFO: (7) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 4.74843ms) Jun 1 14:25:35.284: INFO: (7) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.66695ms) Jun 1 14:25:35.287: INFO: (8) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.183538ms) Jun 1 14:25:35.287: INFO: (8) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 3.198826ms) Jun 1 14:25:35.287: INFO: (8) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... 
(200; 3.482631ms) Jun 1 14:25:35.288: INFO: (8) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.963036ms) Jun 1 14:25:35.288: INFO: (8) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.149484ms) Jun 1 14:25:35.288: INFO: (8) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.413071ms) Jun 1 14:25:35.289: INFO: (8) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.918035ms) Jun 1 14:25:35.289: INFO: (8) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 4.845868ms) Jun 1 14:25:35.289: INFO: (8) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.853365ms) Jun 1 14:25:35.289: INFO: (8) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.958164ms) Jun 1 14:25:35.289: INFO: (8) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.77237ms) Jun 1 14:25:35.289: INFO: (8) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 5.024864ms) Jun 1 14:25:35.292: INFO: (9) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.442227ms) Jun 1 14:25:35.292: INFO: (9) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 3.65989ms) Jun 1 14:25:35.293: INFO: (9) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 3.715989ms) Jun 1 14:25:35.293: INFO: (9) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.05624ms) Jun 1 14:25:35.293: INFO: (9) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.372421ms) Jun 1 14:25:35.293: INFO: (9) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 
4.299425ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.66536ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.663454ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.749386ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.763241ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... (200; 4.785977ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.814129ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test (200; 4.862077ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 4.946034ms) Jun 1 14:25:35.294: INFO: (9) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 5.232912ms) Jun 1 14:25:35.297: INFO: (10) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 3.108121ms) Jun 1 14:25:35.297: INFO: (10) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 3.250799ms) Jun 1 14:25:35.297: INFO: (10) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.194835ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 3.576975ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.754576ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 3.660653ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 3.709649ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.668699ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.735309ms) Jun 1 14:25:35.298: INFO: (10) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test (200; 2.631586ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 3.777024ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 3.75312ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.879196ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.820407ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... 
(200; 3.82001ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.827031ms) Jun 1 14:25:35.303: INFO: (11) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 3.905762ms) Jun 1 14:25:35.305: INFO: (11) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 5.150466ms) Jun 1 14:25:35.305: INFO: (11) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 5.202563ms) Jun 1 14:25:35.305: INFO: (11) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 5.134572ms) Jun 1 14:25:35.305: INFO: (11) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 5.137867ms) Jun 1 14:25:35.305: INFO: (11) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 5.223016ms) Jun 1 14:25:35.305: INFO: (11) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 5.109634ms) Jun 1 14:25:35.307: INFO: (12) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 2.106965ms) Jun 1 14:25:35.308: INFO: (12) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.196975ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.253595ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... 
(200; 4.207754ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.22147ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.237471ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.71635ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.648659ms) Jun 1 14:25:35.309: INFO: (12) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.791211ms) Jun 1 14:25:35.310: INFO: (12) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.81731ms) Jun 1 14:25:35.310: INFO: (12) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 4.795417ms) Jun 1 14:25:35.310: INFO: (12) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.813332ms) Jun 1 14:25:35.310: INFO: (12) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 5.014123ms) Jun 1 14:25:35.310: INFO: (12) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.847187ms) Jun 1 14:25:35.310: INFO: (12) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 4.952591ms) Jun 1 14:25:35.313: INFO: (13) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.203502ms) Jun 1 14:25:35.313: INFO: (13) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.356919ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.504167ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 4.519926ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... (200; 4.654309ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.593683ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.62924ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.648791ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.657944ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 4.716688ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.63352ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.755645ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 4.706503ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.712046ms) Jun 1 14:25:35.314: INFO: (13) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.735214ms) Jun 1 14:25:35.318: INFO: (14) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... (200; 3.10908ms) Jun 1 14:25:35.318: INFO: (14) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.235409ms) Jun 1 14:25:35.318: INFO: (14) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 3.529002ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.309137ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 4.239999ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.348121ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 4.289749ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.371699ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.319404ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.377809ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.386462ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.359623ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 4.784222ms) Jun 1 14:25:35.319: INFO: (14) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... (200; 2.213641ms) Jun 1 14:25:35.322: INFO: (15) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... 
(200; 8.871312ms) Jun 1 14:25:35.328: INFO: (15) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 8.952112ms) Jun 1 14:25:35.328: INFO: (15) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 8.909835ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 9.02682ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 8.992745ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 8.974729ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 9.041353ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 8.980396ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 9.037117ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 9.044105ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 9.113247ms) Jun 1 14:25:35.329: INFO: (15) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 9.173939ms) Jun 1 14:25:35.331: INFO: (16) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 2.296945ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 2.665915ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 3.034729ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... 
(200; 3.246754ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.083631ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 3.248745ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.147686ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 3.214534ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... (200; 3.163992ms) Jun 1 14:25:35.332: INFO: (16) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 3.332743ms) Jun 1 14:25:35.333: INFO: (16) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.193377ms) Jun 1 14:25:35.333: INFO: (16) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.346497ms) Jun 1 14:25:35.333: INFO: (16) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 4.351008ms) Jun 1 14:25:35.333: INFO: (16) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 4.406632ms) Jun 1 14:25:35.336: INFO: (17) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 2.504113ms) Jun 1 14:25:35.336: INFO: (17) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... 
(200; 2.617808ms) Jun 1 14:25:35.336: INFO: (17) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 2.45218ms) Jun 1 14:25:35.337: INFO: (17) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 3.953804ms) Jun 1 14:25:35.337: INFO: (17) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.884274ms) Jun 1 14:25:35.337: INFO: (17) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname1/proxy/: foo (200; 3.941327ms) Jun 1 14:25:35.337: INFO: (17) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 3.918167ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname1/proxy/: tls baz (200; 4.24828ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.219494ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 4.406067ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:460/proxy/: tls baz (200; 4.494641ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 4.466546ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 4.443273ms) Jun 1 14:25:35.338: INFO: (17) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... (200; 4.571549ms) Jun 1 14:25:35.341: INFO: (18) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 2.765548ms) Jun 1 14:25:35.341: INFO: (18) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 3.31282ms) Jun 1 14:25:35.342: INFO: (18) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:1080/proxy/: ... 
(200; 3.458555ms) Jun 1 14:25:35.342: INFO: (18) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:1080/proxy/: test<... (200; 3.56443ms) Jun 1 14:25:35.342: INFO: (18) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 3.609329ms) Jun 1 14:25:35.342: INFO: (18) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: ... (200; 2.444479ms) Jun 1 14:25:35.346: INFO: (19) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:443/proxy/: test<... (200; 5.137388ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/pods/https:proxy-service-5dcwh-2khl2:462/proxy/: tls qux (200; 4.994353ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:162/proxy/: bar (200; 4.966796ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/pods/http:proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 5.044217ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2:160/proxy/: foo (200; 5.000211ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname1/proxy/: foo (200; 5.088635ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/pods/proxy-service-5dcwh-2khl2/proxy/: test (200; 5.072173ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/services/proxy-service-5dcwh:portname2/proxy/: bar (200; 5.260295ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/services/https:proxy-service-5dcwh:tlsportname2/proxy/: tls qux (200; 5.463492ms) Jun 1 14:25:35.348: INFO: (19) /api/v1/namespaces/proxy-1177/services/http:proxy-service-5dcwh:portname2/proxy/: bar (200; 5.552811ms) STEP: deleting ReplicationController proxy-service-5dcwh in namespace proxy-1177, will wait for the garbage collector to delete the pods Jun 1 14:25:35.407: INFO: Deleting ReplicationController proxy-service-5dcwh took: 6.496087ms Jun 1 14:25:35.507: INFO: Terminating 
ReplicationController proxy-service-5dcwh pods took: 100.294368ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:25:42.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1177" for this suite. Jun 1 14:25:48.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:25:48.335: INFO: namespace proxy-1177 deletion completed in 6.122937765s • [SLOW TEST:17.408 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:25:48.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-619ff33f-bfc5-412b-8129-190011f49225 STEP: Creating a pod to test consume configMaps Jun 1 14:25:48.411: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f" in namespace "configmap-6923" to be "success or failure" Jun 1 14:25:48.421: INFO: Pod "pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.82846ms Jun 1 14:25:50.453: INFO: Pod "pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041857539s Jun 1 14:25:52.458: INFO: Pod "pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046827132s STEP: Saw pod success Jun 1 14:25:52.458: INFO: Pod "pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f" satisfied condition "success or failure" Jun 1 14:25:52.462: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f container configmap-volume-test: STEP: delete the pod Jun 1 14:25:52.513: INFO: Waiting for pod pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f to disappear Jun 1 14:25:52.522: INFO: Pod pod-configmaps-2cb60747-2d3b-4f01-9d1a-b14368bf9e3f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:25:52.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6923" for this suite. 
Jun 1 14:25:58.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:25:58.668: INFO: namespace configmap-6923 deletion completed in 6.141281581s • [SLOW TEST:10.332 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:25:58.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 1 14:25:58.756: INFO: Waiting up to 5m0s for pod "pod-9460a207-c859-4b7e-8519-afa0cbb73eb7" in namespace "emptydir-6259" to be "success or failure" Jun 1 14:25:58.758: INFO: Pod "pod-9460a207-c859-4b7e-8519-afa0cbb73eb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591089ms Jun 1 14:26:00.763: INFO: Pod "pod-9460a207-c859-4b7e-8519-afa0cbb73eb7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006951346s Jun 1 14:26:02.767: INFO: Pod "pod-9460a207-c859-4b7e-8519-afa0cbb73eb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011493357s STEP: Saw pod success Jun 1 14:26:02.767: INFO: Pod "pod-9460a207-c859-4b7e-8519-afa0cbb73eb7" satisfied condition "success or failure" Jun 1 14:26:02.771: INFO: Trying to get logs from node iruya-worker2 pod pod-9460a207-c859-4b7e-8519-afa0cbb73eb7 container test-container: STEP: delete the pod Jun 1 14:26:02.790: INFO: Waiting for pod pod-9460a207-c859-4b7e-8519-afa0cbb73eb7 to disappear Jun 1 14:26:02.793: INFO: Pod pod-9460a207-c859-4b7e-8519-afa0cbb73eb7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:26:02.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6259" for this suite. Jun 1 14:26:08.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:26:08.892: INFO: namespace emptydir-6259 deletion completed in 6.095782038s • [SLOW TEST:10.224 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:26:08.892: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 1 14:26:30.970: INFO: Container started at 2020-06-01 14:26:11 +0000 UTC, pod became ready at 2020-06-01 14:26:30 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:26:30.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7217" for this suite. Jun 1 14:26:52.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:26:53.089: INFO: namespace container-probe-7217 deletion completed in 22.115177552s • [SLOW TEST:44.197 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:26:53.090: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 1 14:26:53.162: INFO: Waiting up to 5m0s for pod "downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c" in namespace "downward-api-4648" to be "success or failure" Jun 1 14:26:53.167: INFO: Pod "downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548319ms Jun 1 14:26:55.172: INFO: Pod "downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009378782s Jun 1 14:26:57.176: INFO: Pod "downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c": Phase="Running", Reason="", readiness=true. Elapsed: 4.013513344s Jun 1 14:26:59.188: INFO: Pod "downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025706949s STEP: Saw pod success Jun 1 14:26:59.188: INFO: Pod "downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c" satisfied condition "success or failure" Jun 1 14:26:59.191: INFO: Trying to get logs from node iruya-worker pod downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c container dapi-container: STEP: delete the pod Jun 1 14:26:59.216: INFO: Waiting for pod downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c to disappear Jun 1 14:26:59.250: INFO: Pod downward-api-a82ee69d-97c1-4c08-8949-e0aa11f9809c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:26:59.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4648" for this suite. 
Jun 1 14:27:05.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:27:05.385: INFO: namespace downward-api-4648 deletion completed in 6.092765341s • [SLOW TEST:12.295 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:27:05.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d250928f-4341-4bfe-9b95-ef9bae92205e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d250928f-4341-4bfe-9b95-ef9bae92205e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:27:11.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4027" for this suite. 
Jun 1 14:27:33.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:27:33.659: INFO: namespace projected-4027 deletion completed in 22.090938506s • [SLOW TEST:28.274 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:27:33.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 1 14:27:38.766: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:27:39.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3545" for this suite. 
Jun 1 14:28:01.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:28:01.964: INFO: namespace replicaset-3545 deletion completed in 22.180072977s • [SLOW TEST:28.305 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:28:01.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jun 1 14:28:06.098: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 1 14:28:11.198: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed 
[AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:28:11.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7197" for this suite. Jun 1 14:28:17.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:28:17.304: INFO: namespace pods-7197 deletion completed in 6.096540009s • [SLOW TEST:15.339 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:28:17.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-nlbp STEP: Creating a pod to test 
atomic-volume-subpath Jun 1 14:28:17.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nlbp" in namespace "subpath-9119" to be "success or failure" Jun 1 14:28:17.399: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.817121ms Jun 1 14:28:19.404: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00824383s Jun 1 14:28:21.407: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 4.012030878s Jun 1 14:28:23.411: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 6.015711232s Jun 1 14:28:25.415: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 8.019877243s Jun 1 14:28:27.418: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 10.02281209s Jun 1 14:28:29.422: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 12.026666425s Jun 1 14:28:31.426: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 14.030844692s Jun 1 14:28:33.430: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 16.034628535s Jun 1 14:28:35.435: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 18.039628962s Jun 1 14:28:37.439: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 20.043604595s Jun 1 14:28:39.443: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Running", Reason="", readiness=true. Elapsed: 22.047841903s Jun 1 14:28:41.447: INFO: Pod "pod-subpath-test-configmap-nlbp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.051759936s STEP: Saw pod success Jun 1 14:28:41.447: INFO: Pod "pod-subpath-test-configmap-nlbp" satisfied condition "success or failure" Jun 1 14:28:41.450: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-nlbp container test-container-subpath-configmap-nlbp: STEP: delete the pod Jun 1 14:28:41.475: INFO: Waiting for pod pod-subpath-test-configmap-nlbp to disappear Jun 1 14:28:41.497: INFO: Pod pod-subpath-test-configmap-nlbp no longer exists STEP: Deleting pod pod-subpath-test-configmap-nlbp Jun 1 14:28:41.498: INFO: Deleting pod "pod-subpath-test-configmap-nlbp" in namespace "subpath-9119" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:28:41.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9119" for this suite. Jun 1 14:28:47.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:28:47.733: INFO: namespace subpath-9119 deletion completed in 6.159823912s • [SLOW TEST:30.428 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:28:47.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4580/secret-test-d3e5888d-2aff-4c84-8951-ffdb66e52348 STEP: Creating a pod to test consume secrets Jun 1 14:28:47.865: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930" in namespace "secrets-4580" to be "success or failure" Jun 1 14:28:47.875: INFO: Pod "pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930": Phase="Pending", Reason="", readiness=false. Elapsed: 9.649445ms Jun 1 14:28:49.879: INFO: Pod "pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013396497s Jun 1 14:28:51.883: INFO: Pod "pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017518817s STEP: Saw pod success Jun 1 14:28:51.883: INFO: Pod "pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930" satisfied condition "success or failure" Jun 1 14:28:51.886: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930 container env-test: STEP: delete the pod Jun 1 14:28:51.928: INFO: Waiting for pod pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930 to disappear Jun 1 14:28:51.935: INFO: Pod pod-configmaps-4c280ff1-2c1a-4eb7-b6cc-d3a252d55930 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:28:51.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4580" for this suite. Jun 1 14:28:57.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:28:58.026: INFO: namespace secrets-4580 deletion completed in 6.088225499s • [SLOW TEST:10.292 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:28:58.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-9832 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9832 to expose endpoints map[] Jun 1 14:28:58.155: INFO: successfully validated that service endpoint-test2 in namespace services-9832 exposes endpoints map[] (45.798043ms elapsed) STEP: Creating pod pod1 in namespace services-9832 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9832 to expose endpoints map[pod1:[80]] Jun 1 14:29:01.211: INFO: successfully validated that service endpoint-test2 in namespace services-9832 exposes endpoints map[pod1:[80]] (3.048784402s elapsed) STEP: Creating pod pod2 in namespace services-9832 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9832 to expose endpoints map[pod1:[80] pod2:[80]] Jun 1 14:29:05.303: INFO: successfully validated that service endpoint-test2 in namespace services-9832 exposes endpoints map[pod1:[80] pod2:[80]] (4.088148847s elapsed) STEP: Deleting pod pod1 in namespace services-9832 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9832 to expose endpoints map[pod2:[80]] Jun 1 14:29:06.346: INFO: successfully validated that service endpoint-test2 in namespace services-9832 exposes endpoints map[pod2:[80]] (1.037287964s elapsed) STEP: Deleting pod pod2 in namespace services-9832 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9832 to expose endpoints map[] Jun 1 14:29:07.356: INFO: successfully validated that service endpoint-test2 in namespace services-9832 exposes endpoints map[] (1.005745788s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:29:07.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9832" for this suite. Jun 1 14:29:13.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:29:13.603: INFO: namespace services-9832 deletion completed in 6.20782231s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:15.577 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:29:13.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 1 14:29:21.733: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:21.758: INFO: Pod pod-with-poststart-http-hook still exists Jun 1 14:29:23.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:23.762: INFO: Pod pod-with-poststart-http-hook still exists Jun 1 14:29:25.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:25.762: INFO: Pod pod-with-poststart-http-hook still exists Jun 1 14:29:27.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:27.762: INFO: Pod pod-with-poststart-http-hook still exists Jun 1 14:29:29.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:29.762: INFO: Pod pod-with-poststart-http-hook still exists Jun 1 14:29:31.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:31.763: INFO: Pod pod-with-poststart-http-hook still exists Jun 1 14:29:33.758: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 1 14:29:33.762: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:29:33.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3796" for this suite. 
Jun 1 14:29:55.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:29:55.856: INFO: namespace container-lifecycle-hook-3796 deletion completed in 22.090345466s • [SLOW TEST:42.252 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:29:55.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Jun 1 14:29:55.958: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8855" to be "success or failure" Jun 1 14:29:55.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.884873ms Jun 1 14:29:57.988: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030564121s Jun 1 14:29:59.994: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035857641s Jun 1 14:30:01.998: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040518814s STEP: Saw pod success Jun 1 14:30:01.998: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 1 14:30:02.002: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 1 14:30:02.025: INFO: Waiting for pod pod-host-path-test to disappear Jun 1 14:30:02.030: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:30:02.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8855" for this suite. 
Jun 1 14:30:08.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:30:08.140: INFO: namespace hostpath-8855 deletion completed in 6.107182194s • [SLOW TEST:12.283 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:30:08.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 1 14:30:08.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf" in namespace "downward-api-9506" to be "success or failure" Jun 1 14:30:08.270: INFO: Pod "downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.670371ms Jun 1 14:30:10.274: INFO: Pod "downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04755413s Jun 1 14:30:12.279: INFO: Pod "downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052346473s STEP: Saw pod success Jun 1 14:30:12.279: INFO: Pod "downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf" satisfied condition "success or failure" Jun 1 14:30:12.282: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf container client-container: STEP: delete the pod Jun 1 14:30:12.366: INFO: Waiting for pod downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf to disappear Jun 1 14:30:12.369: INFO: Pod downwardapi-volume-4ef2bc70-355b-441d-ae25-00fe2eca40bf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:30:12.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9506" for this suite. 
Jun 1 14:30:18.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:30:18.456: INFO: namespace downward-api-9506 deletion completed in 6.08324443s • [SLOW TEST:10.316 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:30:18.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 1 14:30:23.048: INFO: Successfully updated pod "annotationupdate421c505f-ae4e-4ab4-8ce2-14240e8b5a58" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:30:25.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7255" for this suite. 
Jun 1 14:30:47.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:30:47.172: INFO: namespace downward-api-7255 deletion completed in 22.08519373s
• [SLOW TEST:28.716 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:30:47.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 1 14:30:47.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5390'
Jun 1 14:30:49.746: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 1 14:30:49.746: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jun 1 14:30:49.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5390'
Jun 1 14:30:49.860: INFO: stderr: ""
Jun 1 14:30:49.860: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:30:49.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5390" for this suite.
Jun 1 14:30:55.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:30:55.978: INFO: namespace kubectl-5390 deletion completed in 6.096328146s
• [SLOW TEST:8.806 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:30:55.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-8cdd3ffa-d2e1-4701-9fa3-6d0cbcf1183f
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:30:56.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-739" for this suite.
Jun 1 14:31:02.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:31:02.171: INFO: namespace configmap-739 deletion completed in 6.132805471s
• [SLOW TEST:6.192 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:31:02.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-fb2r
STEP: Creating a pod to test atomic-volume-subpath
Jun 1 14:31:02.270: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fb2r" in namespace "subpath-4579" to be "success or failure"
Jun 1 14:31:02.310: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Pending", Reason="", readiness=false. Elapsed: 40.082525ms
Jun 1 14:31:04.314: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044040965s
Jun 1 14:31:06.318: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 4.047819342s
Jun 1 14:31:08.322: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 6.051677154s
Jun 1 14:31:10.326: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.055358299s
Jun 1 14:31:12.330: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.059900864s
Jun 1 14:31:14.335: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 12.06482583s
Jun 1 14:31:16.340: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 14.070164031s
Jun 1 14:31:18.345: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 16.075116771s
Jun 1 14:31:20.349: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 18.079218901s
Jun 1 14:31:22.354: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 20.083661877s
Jun 1 14:31:24.358: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 22.088103447s
Jun 1 14:31:26.363: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Running", Reason="", readiness=true. Elapsed: 24.092803316s
Jun 1 14:31:28.368: INFO: Pod "pod-subpath-test-secret-fb2r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.097318388s
STEP: Saw pod success
Jun 1 14:31:28.368: INFO: Pod "pod-subpath-test-secret-fb2r" satisfied condition "success or failure"
Jun 1 14:31:28.371: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-fb2r container test-container-subpath-secret-fb2r:
STEP: delete the pod
Jun 1 14:31:28.403: INFO: Waiting for pod pod-subpath-test-secret-fb2r to disappear
Jun 1 14:31:28.414: INFO: Pod pod-subpath-test-secret-fb2r no longer exists
STEP: Deleting pod pod-subpath-test-secret-fb2r
Jun 1 14:31:28.414: INFO: Deleting pod "pod-subpath-test-secret-fb2r" in namespace "subpath-4579"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:31:28.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4579" for this suite.
Jun 1 14:31:34.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:31:34.529: INFO: namespace subpath-4579 deletion completed in 6.110057701s
• [SLOW TEST:32.356 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:31:34.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 14:31:34.572: INFO: Creating deployment "nginx-deployment"
Jun 1 14:31:34.576: INFO: Waiting for observed generation 1
Jun 1 14:31:36.602: INFO: Waiting for all required pods to come up
Jun 1 14:31:36.606: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 1 14:31:46.614: INFO: Waiting for deployment "nginx-deployment" to complete
Jun 1 14:31:46.619: INFO: Updating deployment "nginx-deployment" with a
non-existent image
Jun 1 14:31:46.626: INFO: Updating deployment nginx-deployment
Jun 1 14:31:46.626: INFO: Waiting for observed generation 2
Jun 1 14:31:48.648: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 1 14:31:48.651: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 1 14:31:48.721: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 1 14:31:48.728: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 1 14:31:48.728: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 1 14:31:48.730: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 1 14:31:48.734: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jun 1 14:31:48.734: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jun 1 14:31:48.739: INFO: Updating deployment nginx-deployment
Jun 1 14:31:48.739: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jun 1 14:31:48.755: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 1 14:31:48.774: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jun 1 14:31:50.808: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2560,SelfLink:/apis/apps/v1/namespaces/deployment-2560/deployments/nginx-deployment,UID:2264afe1-b09c-45eb-929f-e6c0acab0a87,ResourceVersion:14099008,Generation:3,CreationTimestamp:2020-06-01 14:31:34 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-06-01 14:31:48 +0000 UTC 2020-06-01 14:31:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-01 14:31:49 +0000 UTC 2020-06-01 14:31:34 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 1 14:31:50.812: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2560,SelfLink:/apis/apps/v1/namespaces/deployment-2560/replicasets/nginx-deployment-55fb7cb77f,UID:7afc30ae-bda6-4e4e-9bfe-823fd19a08a4,ResourceVersion:14098991,Generation:3,CreationTimestamp:2020-06-01 14:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2264afe1-b09c-45eb-929f-e6c0acab0a87 0xc0028e27b7 0xc0028e27b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 1 14:31:50.812: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 1 14:31:50.812: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2560,SelfLink:/apis/apps/v1/namespaces/deployment-2560/replicasets/nginx-deployment-7b8c6f4498,UID:7da8a3fd-8e20-4334-9cbc-9d36f682272f,ResourceVersion:14099001,Generation:3,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2264afe1-b09c-45eb-929f-e6c0acab0a87 0xc0028e2887 0xc0028e2888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 1 14:31:50.863: INFO: Pod "nginx-deployment-55fb7cb77f-42mtj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-42mtj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-42mtj,UID:aab879cb-af34-4668-9e02-5b0900ed3975,ResourceVersion:14099018,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024ce5d7 0xc0024ce5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0024ce650} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024ce670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.863: INFO: Pod "nginx-deployment-55fb7cb77f-5sjvk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5sjvk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-5sjvk,UID:3526211b-e948-4fce-96ef-323d7d812d10,ResourceVersion:14098920,Generation:0,CreationTimestamp:2020-06-01 14:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024ce757 0xc0024ce758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024ce7d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024ce7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:46 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.863: INFO: Pod "nginx-deployment-55fb7cb77f-7lprc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7lprc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-7lprc,UID:5a8b5602-05ed-4d3d-bbba-81923c945ce0,ResourceVersion:14099027,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024ce8c7 0xc0024ce8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024ce940} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024ce960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.864: INFO: Pod "nginx-deployment-55fb7cb77f-7qpjd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7qpjd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-7qpjd,UID:643a3907-b71e-414e-8b80-8e5b2c3eb355,ResourceVersion:14099010,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cea37 0xc0024cea38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0024ceab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024ceae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.864: INFO: Pod "nginx-deployment-55fb7cb77f-h6ncd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h6ncd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-h6ncd,UID:65e4b08a-bfc9-497d-b58b-f7a4964bab16,ResourceVersion:14099015,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cebb7 0xc0024cebb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024cec30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cec50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.864: INFO: Pod "nginx-deployment-55fb7cb77f-kb68v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kb68v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-kb68v,UID:2cb83c54-331b-4df6-8d9d-9379a6c6a398,ResourceVersion:14099024,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024ced27 0xc0024ced28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024ceda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cedc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.864: INFO: Pod "nginx-deployment-55fb7cb77f-lfvbs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lfvbs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-lfvbs,UID:854e132b-5f8d-4b9f-8e85-09570ca8cd2c,ResourceVersion:14099002,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cee97 0xc0024cee98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0024cef10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cef30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.864: INFO: Pod "nginx-deployment-55fb7cb77f-lrj2k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lrj2k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-lrj2k,UID:7798fe6d-c1b2-48fb-b7e0-eff48e954f59,ResourceVersion:14098923,Generation:0,CreationTimestamp:2020-06-01 14:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cf007 0xc0024cf008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024cf080} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cf0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:46 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.864: INFO: Pod "nginx-deployment-55fb7cb77f-nlcks" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nlcks,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-nlcks,UID:46276360-df87-4dc4-b04c-59e8d8982be7,ResourceVersion:14099053,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cf177 0xc0024cf178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024cf1f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cf210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.865: INFO: Pod "nginx-deployment-55fb7cb77f-phqsr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-phqsr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-phqsr,UID:4cc2f9eb-a775-4efd-ab18-ac93d77c50f2,ResourceVersion:14098908,Generation:0,CreationTimestamp:2020-06-01 14:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cf2e7 0xc0024cf2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0024cf360} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cf380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.865: INFO: Pod "nginx-deployment-55fb7cb77f-qmwbt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qmwbt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-qmwbt,UID:4f40a440-b6ef-4da6-b9dd-3ff9f80006a0,ResourceVersion:14098927,Generation:0,CreationTimestamp:2020-06-01 14:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cf457 0xc0024cf458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024cf4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cf4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:46 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.865: INFO: Pod "nginx-deployment-55fb7cb77f-vq5xf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vq5xf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-vq5xf,UID:d5f58286-a5b3-4002-9c99-0591aaed9f0f,ResourceVersion:14098902,Generation:0,CreationTimestamp:2020-06-01 14:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cf5c7 0xc0024cf5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024cf640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cf660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.865: INFO: Pod "nginx-deployment-55fb7cb77f-zmw5d" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zmw5d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-55fb7cb77f-zmw5d,UID:1d256343-a343-49f6-a906-f809b5e19aa7,ResourceVersion:14098975,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7afc30ae-bda6-4e4e-9bfe-823fd19a08a4 0xc0024cf857 0xc0024cf858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0024cfbf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024cfc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.865: INFO: Pod "nginx-deployment-7b8c6f4498-2f4s2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2f4s2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-2f4s2,UID:8a16e40c-808f-4fb0-af4f-43efe623ee71,ResourceVersion:14098998,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7c0c7 0xc001a7c0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7c140} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7c160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.865: INFO: Pod "nginx-deployment-7b8c6f4498-5tzwn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5tzwn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-5tzwn,UID:c84b1427-4f9a-4aa7-aa3e-fb7c357865a4,ResourceVersion:14098996,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7c247 0xc001a7c248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7c2c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7c2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.866: INFO: Pod "nginx-deployment-7b8c6f4498-6cgxh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6cgxh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-6cgxh,UID:1fc7a592-d8b6-4699-a554-99cc3214bb2c,ResourceVersion:14099030,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7c367 0xc001a7c368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7c3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7c420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.866: INFO: Pod "nginx-deployment-7b8c6f4498-6mhl4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6mhl4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-6mhl4,UID:6066b0e4-fe9d-47b7-b22d-e15f5970473d,ResourceVersion:14098865,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7c6f7 0xc001a7c6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7c780} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7c7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.234,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2c6d98e025fe711f0af0d024bdcd703bf87bf64a7f686054c41ce8f2c13dbeaf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.866: INFO: Pod "nginx-deployment-7b8c6f4498-8vpxl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vpxl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-8vpxl,UID:26cdf60b-b554-4e11-96cd-2da7cbb2113a,ResourceVersion:14098832,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7c877 0xc001a7c878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7c8f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7c910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.127,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4e14ae45100773eb010243d46ee44d34753bc5127765414e49079fa11081b132}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.866: INFO: Pod "nginx-deployment-7b8c6f4498-c8rhd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c8rhd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-c8rhd,UID:4f46bc72-b718-4e59-968d-6160d6afc26b,ResourceVersion:14099057,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7c9e7 0xc001a7c9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7ca60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7ca80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.866: INFO: Pod "nginx-deployment-7b8c6f4498-cntrj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cntrj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-cntrj,UID:072496d9-a4b3-497e-a116-d9818ec086e3,ResourceVersion:14098811,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7cb47 0xc001a7cb48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7cbc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7cbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.126,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://97e8b9242dc89fb4cff32b7a1a37d4ef61fcebea7e581da94ec4cc5565e97b95}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.866: INFO: Pod "nginx-deployment-7b8c6f4498-dv6nd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dv6nd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-dv6nd,UID:ce769ea9-58e2-4e22-ac09-87b7bdfcd6cc,ResourceVersion:14099036,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7ccb7 0xc001a7ccb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7cd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7cd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.867: INFO: Pod "nginx-deployment-7b8c6f4498-fp2wr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fp2wr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-fp2wr,UID:f4064a11-6c12-4d3b-ae5c-b6916789411f,ResourceVersion:14098846,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7ce27 0xc001a7ce28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7cea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7cec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.231,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://945690b49859bb61ff11fbaa83e4a6f85881314bc3f46936d424ac6797f3e0ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.867: INFO: Pod "nginx-deployment-7b8c6f4498-glbct" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-glbct,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-glbct,UID:c4ab9b62-3b9d-4760-b584-d4c5539864a8,ResourceVersion:14098825,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7cf97 0xc001a7cf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d010} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.230,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cc8c39d563f72682d8124e6ff73df08c1ea9db57dc29bcf587fa7f0ea1d27167}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.867: INFO: Pod "nginx-deployment-7b8c6f4498-kfbhk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kfbhk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-kfbhk,UID:a49d3414-eaa1-4cdf-99cd-14e2ee647059,ResourceVersion:14098859,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d107 0xc001a7d108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d180} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.129,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1b3c6e0eebdf521b8e1ff5fc920b01c85363230553bb3af62a8c71e1740b23dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.867: INFO: Pod "nginx-deployment-7b8c6f4498-m2ns4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m2ns4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-m2ns4,UID:25cae3d5-50c5-4d2f-a726-7c6c6831f7de,ResourceVersion:14099035,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d287 0xc001a7d288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d300} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.867: INFO: Pod "nginx-deployment-7b8c6f4498-nvzdb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nvzdb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-nvzdb,UID:0aa6bdbc-48e9-42a3-8a24-fc7522af3275,ResourceVersion:14099051,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d3e7 0xc001a7d3e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.868: INFO: Pod "nginx-deployment-7b8c6f4498-p9lp4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p9lp4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-p9lp4,UID:4ac3a066-6b43-4081-9ca2-c575078cd89e,ResourceVersion:14098984,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d567 0xc001a7d568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.868: INFO: Pod "nginx-deployment-7b8c6f4498-pkxbj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pkxbj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-pkxbj,UID:148edcd8-9ed3-4022-af12-2afeffedbaa2,ResourceVersion:14099020,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d6c7 0xc001a7d6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d740} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.868: INFO: Pod "nginx-deployment-7b8c6f4498-q8txt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q8txt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-q8txt,UID:32d5cf3e-3787-43c8-9e13-891240482c27,ResourceVersion:14099009,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d827 0xc001a7d828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7d8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7d8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.868: INFO: Pod "nginx-deployment-7b8c6f4498-r7zln" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r7zln,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-r7zln,UID:8332a79f-8c47-4631-b5d5-8b3a06701ee1,ResourceVersion:14098863,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7d987 0xc001a7d988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7da00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7da20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.232,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ed1a033b692e5139de2620f86b4e85b1c905a10c2379d076f5cad5ff205cf130}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.868: INFO: Pod "nginx-deployment-7b8c6f4498-vnwkt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vnwkt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-vnwkt,UID:2282b7d1-b1c3-4dd3-8776-1a0cf37ec52c,ResourceVersion:14098838,Generation:0,CreationTimestamp:2020-06-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7daf7 0xc001a7daf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7db70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7db90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.128,StartTime:2020-06-01 14:31:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-01 14:31:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7f19eb41e1b132059294d41f120a06339f42beaab8ae086115dd236eef6a6958}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.869: INFO: Pod "nginx-deployment-7b8c6f4498-zt77r" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zt77r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-zt77r,UID:ca9bcce8-15d0-4c6c-b501-a8e86e2419bc,ResourceVersion:14099058,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7dc67 0xc001a7dc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7dce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7dd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 1 14:31:50.869: INFO: Pod "nginx-deployment-7b8c6f4498-zzsbs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zzsbs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2560,SelfLink:/api/v1/namespaces/deployment-2560/pods/nginx-deployment-7b8c6f4498-zzsbs,UID:89d22542-b271-4aea-bf7c-93f0a26b32af,ResourceVersion:14099063,Generation:0,CreationTimestamp:2020-06-01 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7da8a3fd-8e20-4334-9cbc-9d36f682272f 0xc001a7ddc7 0xc001a7ddc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-27p8v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-27p8v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-27p8v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7de40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7de60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:31:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-01 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:31:50.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2560" for this suite. 
Jun 1 14:32:14.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:32:14.999: INFO: namespace deployment-2560 deletion completed in 24.126198354s
• [SLOW TEST:40.469 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:32:14.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 1 14:32:15.375: INFO: Waiting up to 5m0s for pod "pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2" in namespace "emptydir-8983" to be "success or failure"
Jun 1 14:32:15.419: INFO: Pod "pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2": Phase="Pending", Reason="", readiness=false. Elapsed: 43.870345ms
Jun 1 14:32:17.423: INFO: Pod "pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047769758s
Jun 1 14:32:19.426: INFO: Pod "pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050972242s
Jun 1 14:32:21.431: INFO: Pod "pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055495675s
STEP: Saw pod success
Jun 1 14:32:21.431: INFO: Pod "pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2" satisfied condition "success or failure"
Jun 1 14:32:21.434: INFO: Trying to get logs from node iruya-worker pod pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2 container test-container: 
STEP: delete the pod
Jun 1 14:32:21.490: INFO: Waiting for pod pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2 to disappear
Jun 1 14:32:21.494: INFO: Pod pod-1d72a0a6-511d-4b1b-9a9e-1c1aad8cfae2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:32:21.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8983" for this suite.
Jun 1 14:32:27.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:32:27.583: INFO: namespace emptydir-8983 deletion completed in 6.085550201s
• [SLOW TEST:12.584 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:32:27.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 1 14:32:27.696: INFO: Waiting up to 5m0s for pod "pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8" in namespace "emptydir-4927" to be "success or failure"
Jun 1 14:32:27.713: INFO: Pod "pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.219358ms
Jun 1 14:32:29.718: INFO: Pod "pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021289745s
Jun 1 14:32:31.722: INFO: Pod "pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025411875s
STEP: Saw pod success
Jun 1 14:32:31.722: INFO: Pod "pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8" satisfied condition "success or failure"
Jun 1 14:32:31.725: INFO: Trying to get logs from node iruya-worker pod pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8 container test-container: 
STEP: delete the pod
Jun 1 14:32:31.759: INFO: Waiting for pod pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8 to disappear
Jun 1 14:32:31.770: INFO: Pod pod-f10762ca-11c8-4a62-9c0e-8578c43ed3c8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:32:31.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4927" for this suite.
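[Editor's note] For context on the emptydir specs above: each one launches a short-lived pod whose container writes into a tmpfs-backed emptyDir volume and checks the resulting file mode, after which the framework waits for the pod to reach "success or failure". The manifest below is only an illustrative sketch of such a test pod, assuming a busybox image and hypothetical names; the real pod (e.g. pod-f10762ca-...) is generated by the e2e framework with its own test image.

```yaml
# Hypothetical sketch of an emptyDir-on-tmpfs test pod.
# Names, image, and command are illustrative, not taken from this log.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test        # framework generates names like pod-1d72a0a6-...
spec:
  restartPolicy: Never            # pod runs once, then the test checks success or failure
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /mnt/test/index && stat -c '%a' /mnt/test/index"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # Memory => tmpfs backing, hence the (tmpfs) variants
```

The `medium: Memory` field is what makes the volume tmpfs-backed, which is also why these variants carry the [LinuxOnly] tag.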
Jun 1 14:32:37.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:32:37.900: INFO: namespace emptydir-4927 deletion completed in 6.127182983s
• [SLOW TEST:10.317 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:32:37.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0601 14:32:39.047028 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 1 14:32:39.047: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:32:39.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3030" for this suite.
Jun 1 14:32:45.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:32:45.180: INFO: namespace gc-3030 deletion completed in 6.130489824s
• [SLOW TEST:7.279 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:32:45.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2911.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2911.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2911.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 1 14:32:51.291: INFO: DNS probes using dns-test-e409cf03-e318-47e4-8f89-39c3cc945176 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2911.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2911.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2911.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 1 14:32:59.414: INFO: File jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local from pod dns-2911/dns-test-1e8cbbd1-372e-4856-b950-f4077f034240 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 1 14:32:59.414: INFO: Lookups using dns-2911/dns-test-1e8cbbd1-372e-4856-b950-f4077f034240 failed for: [jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local]
Jun 1 14:33:04.424: INFO: File jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local from pod dns-2911/dns-test-1e8cbbd1-372e-4856-b950-f4077f034240 contains '' instead of 'bar.example.com.'
Jun 1 14:33:04.424: INFO: Lookups using dns-2911/dns-test-1e8cbbd1-372e-4856-b950-f4077f034240 failed for: [jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local]
Jun 1 14:33:09.423: INFO: DNS probes using dns-test-1e8cbbd1-372e-4856-b950-f4077f034240 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2911.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2911.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2911.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 1 14:33:15.918: INFO: File jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local from pod dns-2911/dns-test-ea2a5a3b-d5b6-489e-9086-8e920935b03f contains '' instead of '10.110.87.202'
Jun 1 14:33:15.918: INFO: Lookups using dns-2911/dns-test-ea2a5a3b-d5b6-489e-9086-8e920935b03f failed for: [jessie_udp@dns-test-service-3.dns-2911.svc.cluster.local]
Jun 1 14:33:20.928: INFO: DNS probes using dns-test-ea2a5a3b-d5b6-489e-9086-8e920935b03f succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:33:21.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2911" for this suite.
Jun 1 14:33:27.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:33:27.461: INFO: namespace dns-2911 deletion completed in 6.121055903s
• [SLOW TEST:42.280 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:33:27.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 1 14:33:27.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-534'
Jun 1 14:33:27.639: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 1 14:33:27.639: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jun 1 14:33:29.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-534'
Jun 1 14:33:29.842: INFO: stderr: ""
Jun 1 14:33:29.842: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:33:29.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-534" for this suite.
Jun 1 14:33:51.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:33:51.961: INFO: namespace kubectl-534 deletion completed in 22.11550381s
• [SLOW TEST:24.500 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:33:51.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 1 14:33:52.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5672'
Jun 1 14:33:52.120: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 1 14:33:52.120: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jun 1 14:33:52.143: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rgq5q]
Jun 1 14:33:52.143: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rgq5q" in namespace "kubectl-5672" to be "running and ready"
Jun 1 14:33:52.173: INFO: Pod "e2e-test-nginx-rc-rgq5q": Phase="Pending", Reason="", readiness=false. Elapsed: 30.10926ms
Jun 1 14:33:54.209: INFO: Pod "e2e-test-nginx-rc-rgq5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065212323s
Jun 1 14:33:56.213: INFO: Pod "e2e-test-nginx-rc-rgq5q": Phase="Running", Reason="", readiness=true. Elapsed: 4.069181557s
Jun 1 14:33:56.213: INFO: Pod "e2e-test-nginx-rc-rgq5q" satisfied condition "running and ready"
Jun 1 14:33:56.213: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rgq5q]
Jun 1 14:33:56.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5672'
Jun 1 14:33:56.332: INFO: stderr: ""
Jun 1 14:33:56.332: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jun 1 14:33:56.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5672'
Jun 1 14:33:56.435: INFO: stderr: ""
Jun 1 14:33:56.435: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:33:56.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5672" for this suite.
Jun 1 14:34:02.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:34:02.572: INFO: namespace kubectl-5672 deletion completed in 6.13325844s
• [SLOW TEST:10.610 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:34:02.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-838d9751-b5ea-402f-bcee-064df02546cf in namespace container-probe-9515
Jun 1 14:34:06.674: INFO: Started pod busybox-838d9751-b5ea-402f-bcee-064df02546cf in namespace container-probe-9515
STEP: checking the pod's current state and verifying that restartCount is present
Jun 1 14:34:06.677: INFO: Initial restart count of pod busybox-838d9751-b5ea-402f-bcee-064df02546cf is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:38:07.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9515" for this suite.
Jun 1 14:38:13.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:38:13.618: INFO: namespace container-probe-9515 deletion completed in 6.170866257s
• [SLOW TEST:251.045 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:38:13.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 1 14:38:21.760: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:21.764: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:23.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:23.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:25.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:25.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:27.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:27.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:29.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:29.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:31.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:31.769: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:33.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:33.767: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:35.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:35.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:37.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:37.907: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:39.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:39.769: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 1 14:38:41.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 1 14:38:41.769: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:38:41.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3985" for this suite.
Jun 1 14:39:03.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:39:03.857: INFO: namespace container-lifecycle-hook-3985 deletion completed in 22.08412583s
• [SLOW TEST:50.239 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:39:03.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 1 14:39:03.944: INFO: Creating ReplicaSet my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe
Jun 1 14:39:03.962: INFO: Pod name my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe: Found 0 pods out of 1
Jun 1 14:39:08.967: INFO: Pod name my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe: Found 1 pods out of 1
Jun 1 14:39:08.967: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe" is running
Jun 1 14:39:10.975: INFO: Pod "my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe-xhstt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 14:39:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 14:39:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 14:39:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-01 14:39:03 +0000 UTC Reason: Message:}])
Jun 1 14:39:10.975: INFO: Trying to dial the pod
Jun 1 14:39:15.994: INFO: Controller my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe: Got expected result from replica 1 [my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe-xhstt]: "my-hostname-basic-c23b0238-9ecd-4962-8b21-dc0be3b787fe-xhstt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:39:15.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2042" for this suite.
Jun 1 14:39:22.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:39:22.090: INFO: namespace replicaset-2042 deletion completed in 6.091864306s
• [SLOW TEST:18.232 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:39:22.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1316
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1316
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1316
Jun 1 14:39:22.160: INFO: Found 0 stateful pods, waiting for 1
Jun 1 14:39:32.166: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jun 1 14:39:32.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 1 14:39:32.452: INFO: stderr: "I0601 14:39:32.305579 3564 log.go:172] (0xc000ae2630) (0xc00035c960) Create stream\nI0601 14:39:32.305637 3564 log.go:172] (0xc000ae2630) (0xc00035c960) Stream added, broadcasting: 1\nI0601 14:39:32.309803 3564 log.go:172] (0xc000ae2630) Reply frame received for 1\nI0601 14:39:32.309858 3564 log.go:172] (0xc000ae2630) (0xc00035c000) Create stream\nI0601 14:39:32.309872 3564 log.go:172] (0xc000ae2630) (0xc00035c000) Stream added, broadcasting: 3\nI0601 14:39:32.310953 3564 log.go:172] (0xc000ae2630) Reply frame received for 3\nI0601 14:39:32.310982 3564 log.go:172] (0xc000ae2630) (0xc000512320) Create stream\nI0601 14:39:32.310991 3564 log.go:172] (0xc000ae2630) (0xc000512320) Stream added, broadcasting: 5\nI0601 14:39:32.311990 3564 log.go:172] (0xc000ae2630) Reply frame received for 5\nI0601 14:39:32.418715 3564 log.go:172] (0xc000ae2630) Data frame received for 5\nI0601 14:39:32.418751 3564 log.go:172] (0xc000512320) (5) Data frame handling\nI0601 14:39:32.418764 3564 log.go:172] (0xc000512320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 14:39:32.445059 3564 log.go:172] (0xc000ae2630) Data frame received for 3\nI0601 14:39:32.445090 3564 log.go:172] (0xc00035c000) (3) Data frame handling\nI0601 14:39:32.445106 3564 log.go:172] (0xc00035c000) (3) Data frame sent\nI0601 14:39:32.445260 3564 log.go:172] (0xc000ae2630) Data frame received for 3\nI0601 14:39:32.445275 3564 log.go:172] (0xc00035c000) (3) Data frame handling\nI0601
14:39:32.445455 3564 log.go:172] (0xc000ae2630) Data frame received for 5\nI0601 14:39:32.445488 3564 log.go:172] (0xc000512320) (5) Data frame handling\nI0601 14:39:32.446892 3564 log.go:172] (0xc000ae2630) Data frame received for 1\nI0601 14:39:32.446919 3564 log.go:172] (0xc00035c960) (1) Data frame handling\nI0601 14:39:32.446934 3564 log.go:172] (0xc00035c960) (1) Data frame sent\nI0601 14:39:32.446949 3564 log.go:172] (0xc000ae2630) (0xc00035c960) Stream removed, broadcasting: 1\nI0601 14:39:32.446971 3564 log.go:172] (0xc000ae2630) Go away received\nI0601 14:39:32.447320 3564 log.go:172] (0xc000ae2630) (0xc00035c960) Stream removed, broadcasting: 1\nI0601 14:39:32.447337 3564 log.go:172] (0xc000ae2630) (0xc00035c000) Stream removed, broadcasting: 3\nI0601 14:39:32.447345 3564 log.go:172] (0xc000ae2630) (0xc000512320) Stream removed, broadcasting: 5\n"
Jun 1 14:39:32.452: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 1 14:39:32.452: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 1 14:39:32.455: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 1 14:39:42.461: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 1 14:39:42.461: INFO: Waiting for statefulset status.replicas updated to 0
Jun 1 14:39:42.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999542s
Jun 1 14:39:43.488: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986594554s
Jun 1 14:39:44.492: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982196686s
Jun 1 14:39:45.497: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977364072s
Jun 1 14:39:46.502: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972946966s
Jun 1 14:39:47.506: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.96797153s
Jun 1 14:39:48.511: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.963854426s
Jun 1 14:39:49.516: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.958633088s
Jun 1 14:39:50.521: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.95368786s
Jun 1 14:39:51.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 948.799175ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1316
Jun 1 14:39:52.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 1 14:39:52.766: INFO: stderr: "I0601 14:39:52.669792 3586 log.go:172] (0xc000a964d0) (0xc000660960) Create stream\nI0601 14:39:52.669873 3586 log.go:172] (0xc000a964d0) (0xc000660960) Stream added, broadcasting: 1\nI0601 14:39:52.672629 3586 log.go:172] (0xc000a964d0) Reply frame received for 1\nI0601 14:39:52.672680 3586 log.go:172] (0xc000a964d0) (0xc000948000) Create stream\nI0601 14:39:52.672705 3586 log.go:172] (0xc000a964d0) (0xc000948000) Stream added, broadcasting: 3\nI0601 14:39:52.673995 3586 log.go:172] (0xc000a964d0) Reply frame received for 3\nI0601 14:39:52.674032 3586 log.go:172] (0xc000a964d0) (0xc000660a00) Create stream\nI0601 14:39:52.674043 3586 log.go:172] (0xc000a964d0) (0xc000660a00) Stream added, broadcasting: 5\nI0601 14:39:52.674917 3586 log.go:172] (0xc000a964d0) Reply frame received for 5\nI0601 14:39:52.756989 3586 log.go:172] (0xc000a964d0) Data frame received for 5\nI0601 14:39:52.757035 3586 log.go:172] (0xc000660a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 14:39:52.757104 3586 log.go:172] (0xc000a964d0) Data frame received for 3\nI0601 14:39:52.757502 3586 log.go:172] (0xc000948000) (3) Data frame handling\nI0601 14:39:52.757533 3586 log.go:172] (0xc000948000) (3) Data frame sent\nI0601 14:39:52.757550 3586 log.go:172] (0xc000a964d0) Data frame received for 3\nI0601 14:39:52.757574 3586 log.go:172] (0xc000660a00) (5) Data frame sent\nI0601 14:39:52.757597 3586 log.go:172] (0xc000a964d0) Data frame received for 5\nI0601 14:39:52.757604 3586 log.go:172] (0xc000660a00) (5) Data frame handling\nI0601 14:39:52.757622 3586 log.go:172] (0xc000948000) (3) Data frame handling\nI0601 14:39:52.758856 3586 log.go:172] (0xc000a964d0) Data frame received for 1\nI0601 14:39:52.758867 3586 log.go:172] (0xc000660960) (1) Data frame handling\nI0601 14:39:52.758874 3586 log.go:172] (0xc000660960) (1) Data frame sent\nI0601 14:39:52.759117 3586 log.go:172] (0xc000a964d0) (0xc000660960) Stream removed, broadcasting: 1\nI0601 14:39:52.759158 3586 log.go:172] (0xc000a964d0) Go away received\nI0601 14:39:52.759555 3586 log.go:172] (0xc000a964d0) (0xc000660960) Stream removed, broadcasting: 1\nI0601 14:39:52.759578 3586 log.go:172] (0xc000a964d0) (0xc000948000) Stream removed, broadcasting: 3\nI0601 14:39:52.759588 3586 log.go:172] (0xc000a964d0) (0xc000660a00) Stream removed, broadcasting: 5\n"
Jun 1 14:39:52.766: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 1 14:39:52.766: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 1 14:39:52.769: INFO: Found 1 stateful pods, waiting for 3
Jun 1 14:40:02.775: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 1 14:40:02.775: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 1 14:40:02.775: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jun 1 14:40:02.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-0 -- /bin/sh 
-x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 14:40:03.020: INFO: stderr: "I0601 14:40:02.923379 3607 log.go:172] (0xc000410630) (0xc00063aaa0) Create stream\nI0601 14:40:02.923444 3607 log.go:172] (0xc000410630) (0xc00063aaa0) Stream added, broadcasting: 1\nI0601 14:40:02.925482 3607 log.go:172] (0xc000410630) Reply frame received for 1\nI0601 14:40:02.925606 3607 log.go:172] (0xc000410630) (0xc0009bc000) Create stream\nI0601 14:40:02.925672 3607 log.go:172] (0xc000410630) (0xc0009bc000) Stream added, broadcasting: 3\nI0601 14:40:02.927045 3607 log.go:172] (0xc000410630) Reply frame received for 3\nI0601 14:40:02.927143 3607 log.go:172] (0xc000410630) (0xc00063a1e0) Create stream\nI0601 14:40:02.927159 3607 log.go:172] (0xc000410630) (0xc00063a1e0) Stream added, broadcasting: 5\nI0601 14:40:02.927876 3607 log.go:172] (0xc000410630) Reply frame received for 5\nI0601 14:40:03.013429 3607 log.go:172] (0xc000410630) Data frame received for 3\nI0601 14:40:03.013464 3607 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0601 14:40:03.013483 3607 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0601 14:40:03.013511 3607 log.go:172] (0xc000410630) Data frame received for 5\nI0601 14:40:03.013520 3607 log.go:172] (0xc00063a1e0) (5) Data frame handling\nI0601 14:40:03.013529 3607 log.go:172] (0xc00063a1e0) (5) Data frame sent\nI0601 14:40:03.013542 3607 log.go:172] (0xc000410630) Data frame received for 5\nI0601 14:40:03.013553 3607 log.go:172] (0xc00063a1e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 14:40:03.013608 3607 log.go:172] (0xc000410630) Data frame received for 3\nI0601 14:40:03.013640 3607 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0601 14:40:03.014804 3607 log.go:172] (0xc000410630) Data frame received for 1\nI0601 14:40:03.014827 3607 log.go:172] (0xc00063aaa0) (1) Data frame handling\nI0601 14:40:03.014840 3607 log.go:172] (0xc00063aaa0) (1) Data frame sent\nI0601 14:40:03.014869 3607 
log.go:172] (0xc000410630) (0xc00063aaa0) Stream removed, broadcasting: 1\nI0601 14:40:03.014902 3607 log.go:172] (0xc000410630) Go away received\nI0601 14:40:03.015293 3607 log.go:172] (0xc000410630) (0xc00063aaa0) Stream removed, broadcasting: 1\nI0601 14:40:03.015317 3607 log.go:172] (0xc000410630) (0xc0009bc000) Stream removed, broadcasting: 3\nI0601 14:40:03.015329 3607 log.go:172] (0xc000410630) (0xc00063a1e0) Stream removed, broadcasting: 5\n" Jun 1 14:40:03.021: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 14:40:03.021: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 14:40:03.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 14:40:03.313: INFO: stderr: "I0601 14:40:03.170635 3627 log.go:172] (0xc000a0c420) (0xc0002d86e0) Create stream\nI0601 14:40:03.170693 3627 log.go:172] (0xc000a0c420) (0xc0002d86e0) Stream added, broadcasting: 1\nI0601 14:40:03.172841 3627 log.go:172] (0xc000a0c420) Reply frame received for 1\nI0601 14:40:03.172898 3627 log.go:172] (0xc000a0c420) (0xc0008fc000) Create stream\nI0601 14:40:03.172915 3627 log.go:172] (0xc000a0c420) (0xc0008fc000) Stream added, broadcasting: 3\nI0601 14:40:03.173884 3627 log.go:172] (0xc000a0c420) Reply frame received for 3\nI0601 14:40:03.173924 3627 log.go:172] (0xc000a0c420) (0xc0009ba000) Create stream\nI0601 14:40:03.173941 3627 log.go:172] (0xc000a0c420) (0xc0009ba000) Stream added, broadcasting: 5\nI0601 14:40:03.174798 3627 log.go:172] (0xc000a0c420) Reply frame received for 5\nI0601 14:40:03.263393 3627 log.go:172] (0xc000a0c420) Data frame received for 5\nI0601 14:40:03.263440 3627 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0601 14:40:03.263468 3627 log.go:172] (0xc0009ba000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0601 14:40:03.304359 3627 log.go:172] (0xc000a0c420) Data frame received for 3\nI0601 14:40:03.304408 3627 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0601 14:40:03.304448 3627 log.go:172] (0xc0008fc000) (3) Data frame sent\nI0601 14:40:03.304903 3627 log.go:172] (0xc000a0c420) Data frame received for 5\nI0601 14:40:03.304945 3627 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0601 14:40:03.304985 3627 log.go:172] (0xc000a0c420) Data frame received for 3\nI0601 14:40:03.305269 3627 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0601 14:40:03.307509 3627 log.go:172] (0xc000a0c420) Data frame received for 1\nI0601 14:40:03.307546 3627 log.go:172] (0xc0002d86e0) (1) Data frame handling\nI0601 14:40:03.307572 3627 log.go:172] (0xc0002d86e0) (1) Data frame sent\nI0601 14:40:03.307745 3627 log.go:172] (0xc000a0c420) (0xc0002d86e0) Stream removed, broadcasting: 1\nI0601 14:40:03.307788 3627 log.go:172] (0xc000a0c420) Go away received\nI0601 14:40:03.308283 3627 log.go:172] (0xc000a0c420) (0xc0002d86e0) Stream removed, broadcasting: 1\nI0601 14:40:03.308305 3627 log.go:172] (0xc000a0c420) (0xc0008fc000) Stream removed, broadcasting: 3\nI0601 14:40:03.308317 3627 log.go:172] (0xc000a0c420) (0xc0009ba000) Stream removed, broadcasting: 5\n" Jun 1 14:40:03.313: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 14:40:03.313: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 14:40:03.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 1 14:40:03.559: INFO: stderr: "I0601 14:40:03.446972 3649 log.go:172] (0xc00012ae70) (0xc00076e6e0) Create stream\nI0601 14:40:03.447026 3649 log.go:172] (0xc00012ae70) (0xc00076e6e0) Stream added, broadcasting: 1\nI0601 
14:40:03.449599 3649 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0601 14:40:03.449677 3649 log.go:172] (0xc00012ae70) (0xc0009dc000) Create stream\nI0601 14:40:03.449721 3649 log.go:172] (0xc00012ae70) (0xc0009dc000) Stream added, broadcasting: 3\nI0601 14:40:03.450720 3649 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0601 14:40:03.450781 3649 log.go:172] (0xc00012ae70) (0xc00083a000) Create stream\nI0601 14:40:03.450801 3649 log.go:172] (0xc00012ae70) (0xc00083a000) Stream added, broadcasting: 5\nI0601 14:40:03.451816 3649 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0601 14:40:03.522157 3649 log.go:172] (0xc00012ae70) Data frame received for 5\nI0601 14:40:03.522183 3649 log.go:172] (0xc00083a000) (5) Data frame handling\nI0601 14:40:03.522197 3649 log.go:172] (0xc00083a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0601 14:40:03.550385 3649 log.go:172] (0xc00012ae70) Data frame received for 5\nI0601 14:40:03.550428 3649 log.go:172] (0xc00083a000) (5) Data frame handling\nI0601 14:40:03.550451 3649 log.go:172] (0xc00012ae70) Data frame received for 3\nI0601 14:40:03.550462 3649 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0601 14:40:03.550475 3649 log.go:172] (0xc0009dc000) (3) Data frame sent\nI0601 14:40:03.550723 3649 log.go:172] (0xc00012ae70) Data frame received for 3\nI0601 14:40:03.550751 3649 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0601 14:40:03.552774 3649 log.go:172] (0xc00012ae70) Data frame received for 1\nI0601 14:40:03.552800 3649 log.go:172] (0xc00076e6e0) (1) Data frame handling\nI0601 14:40:03.552823 3649 log.go:172] (0xc00076e6e0) (1) Data frame sent\nI0601 14:40:03.553346 3649 log.go:172] (0xc00012ae70) (0xc00076e6e0) Stream removed, broadcasting: 1\nI0601 14:40:03.553419 3649 log.go:172] (0xc00012ae70) Go away received\nI0601 14:40:03.553716 3649 log.go:172] (0xc00012ae70) (0xc00076e6e0) Stream removed, broadcasting: 1\nI0601 14:40:03.553741 3649 log.go:172] 
(0xc00012ae70) (0xc0009dc000) Stream removed, broadcasting: 3\nI0601 14:40:03.553756 3649 log.go:172] (0xc00012ae70) (0xc00083a000) Stream removed, broadcasting: 5\n" Jun 1 14:40:03.559: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 1 14:40:03.559: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 1 14:40:03.559: INFO: Waiting for statefulset status.replicas updated to 0 Jun 1 14:40:03.562: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 1 14:40:13.571: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 1 14:40:13.571: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 1 14:40:13.571: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 1 14:40:13.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999549s Jun 1 14:40:14.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991765126s Jun 1 14:40:15.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987206416s Jun 1 14:40:16.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981405908s Jun 1 14:40:17.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975577647s Jun 1 14:40:18.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97172814s Jun 1 14:40:19.617: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966043473s Jun 1 14:40:20.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960729757s Jun 1 14:40:21.627: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955900529s Jun 1 14:40:22.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.67218ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-1316 Jun 1 14:40:23.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 14:40:23.866: INFO: stderr: "I0601 14:40:23.763518 3670 log.go:172] (0xc000116840) (0xc0003fe820) Create stream\nI0601 14:40:23.763596 3670 log.go:172] (0xc000116840) (0xc0003fe820) Stream added, broadcasting: 1\nI0601 14:40:23.767421 3670 log.go:172] (0xc000116840) Reply frame received for 1\nI0601 14:40:23.767473 3670 log.go:172] (0xc000116840) (0xc0003fe140) Create stream\nI0601 14:40:23.767486 3670 log.go:172] (0xc000116840) (0xc0003fe140) Stream added, broadcasting: 3\nI0601 14:40:23.768457 3670 log.go:172] (0xc000116840) Reply frame received for 3\nI0601 14:40:23.768507 3670 log.go:172] (0xc000116840) (0xc0008a2000) Create stream\nI0601 14:40:23.768524 3670 log.go:172] (0xc000116840) (0xc0008a2000) Stream added, broadcasting: 5\nI0601 14:40:23.769695 3670 log.go:172] (0xc000116840) Reply frame received for 5\nI0601 14:40:23.858528 3670 log.go:172] (0xc000116840) Data frame received for 3\nI0601 14:40:23.858565 3670 log.go:172] (0xc0003fe140) (3) Data frame handling\nI0601 14:40:23.858573 3670 log.go:172] (0xc0003fe140) (3) Data frame sent\nI0601 14:40:23.858578 3670 log.go:172] (0xc000116840) Data frame received for 3\nI0601 14:40:23.858584 3670 log.go:172] (0xc0003fe140) (3) Data frame handling\nI0601 14:40:23.858591 3670 log.go:172] (0xc000116840) Data frame received for 5\nI0601 14:40:23.858596 3670 log.go:172] (0xc0008a2000) (5) Data frame handling\nI0601 14:40:23.858610 3670 log.go:172] (0xc0008a2000) (5) Data frame sent\nI0601 14:40:23.858622 3670 log.go:172] (0xc000116840) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 14:40:23.858629 3670 log.go:172] (0xc0008a2000) (5) Data frame handling\nI0601 14:40:23.860109 3670 log.go:172] (0xc000116840) Data frame received for 1\nI0601 14:40:23.860144 
3670 log.go:172] (0xc0003fe820) (1) Data frame handling\nI0601 14:40:23.860166 3670 log.go:172] (0xc0003fe820) (1) Data frame sent\nI0601 14:40:23.860182 3670 log.go:172] (0xc000116840) (0xc0003fe820) Stream removed, broadcasting: 1\nI0601 14:40:23.860200 3670 log.go:172] (0xc000116840) Go away received\nI0601 14:40:23.860503 3670 log.go:172] (0xc000116840) (0xc0003fe820) Stream removed, broadcasting: 1\nI0601 14:40:23.860522 3670 log.go:172] (0xc000116840) (0xc0003fe140) Stream removed, broadcasting: 3\nI0601 14:40:23.860532 3670 log.go:172] (0xc000116840) (0xc0008a2000) Stream removed, broadcasting: 5\n" Jun 1 14:40:23.866: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 14:40:23.866: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 14:40:23.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 14:40:24.078: INFO: stderr: "I0601 14:40:23.993318 3691 log.go:172] (0xc000a00420) (0xc0005fa820) Create stream\nI0601 14:40:23.993387 3691 log.go:172] (0xc000a00420) (0xc0005fa820) Stream added, broadcasting: 1\nI0601 14:40:23.996128 3691 log.go:172] (0xc000a00420) Reply frame received for 1\nI0601 14:40:23.996186 3691 log.go:172] (0xc000a00420) (0xc0005fa000) Create stream\nI0601 14:40:23.996203 3691 log.go:172] (0xc000a00420) (0xc0005fa000) Stream added, broadcasting: 3\nI0601 14:40:23.997017 3691 log.go:172] (0xc000a00420) Reply frame received for 3\nI0601 14:40:23.997049 3691 log.go:172] (0xc000a00420) (0xc0005fa140) Create stream\nI0601 14:40:23.997059 3691 log.go:172] (0xc000a00420) (0xc0005fa140) Stream added, broadcasting: 5\nI0601 14:40:23.997867 3691 log.go:172] (0xc000a00420) Reply frame received for 5\nI0601 14:40:24.071684 3691 log.go:172] (0xc000a00420) Data frame received for 3\nI0601 14:40:24.071731 
3691 log.go:172] (0xc0005fa000) (3) Data frame handling\nI0601 14:40:24.071752 3691 log.go:172] (0xc0005fa000) (3) Data frame sent\nI0601 14:40:24.071770 3691 log.go:172] (0xc000a00420) Data frame received for 3\nI0601 14:40:24.071785 3691 log.go:172] (0xc0005fa000) (3) Data frame handling\nI0601 14:40:24.071829 3691 log.go:172] (0xc000a00420) Data frame received for 5\nI0601 14:40:24.071848 3691 log.go:172] (0xc0005fa140) (5) Data frame handling\nI0601 14:40:24.071868 3691 log.go:172] (0xc0005fa140) (5) Data frame sent\nI0601 14:40:24.071897 3691 log.go:172] (0xc000a00420) Data frame received for 5\nI0601 14:40:24.071916 3691 log.go:172] (0xc0005fa140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 14:40:24.073619 3691 log.go:172] (0xc000a00420) Data frame received for 1\nI0601 14:40:24.073669 3691 log.go:172] (0xc0005fa820) (1) Data frame handling\nI0601 14:40:24.073697 3691 log.go:172] (0xc0005fa820) (1) Data frame sent\nI0601 14:40:24.073716 3691 log.go:172] (0xc000a00420) (0xc0005fa820) Stream removed, broadcasting: 1\nI0601 14:40:24.073735 3691 log.go:172] (0xc000a00420) Go away received\nI0601 14:40:24.074137 3691 log.go:172] (0xc000a00420) (0xc0005fa820) Stream removed, broadcasting: 1\nI0601 14:40:24.074157 3691 log.go:172] (0xc000a00420) (0xc0005fa000) Stream removed, broadcasting: 3\nI0601 14:40:24.074167 3691 log.go:172] (0xc000a00420) (0xc0005fa140) Stream removed, broadcasting: 5\n" Jun 1 14:40:24.078: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 14:40:24.078: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 14:40:24.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1316 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 1 14:40:24.272: INFO: stderr: "I0601 14:40:24.203475 3712 log.go:172] (0xc0009e0420) 
(0xc0005406e0) Create stream\nI0601 14:40:24.203542 3712 log.go:172] (0xc0009e0420) (0xc0005406e0) Stream added, broadcasting: 1\nI0601 14:40:24.206114 3712 log.go:172] (0xc0009e0420) Reply frame received for 1\nI0601 14:40:24.206160 3712 log.go:172] (0xc0009e0420) (0xc00039e500) Create stream\nI0601 14:40:24.206179 3712 log.go:172] (0xc0009e0420) (0xc00039e500) Stream added, broadcasting: 3\nI0601 14:40:24.206942 3712 log.go:172] (0xc0009e0420) Reply frame received for 3\nI0601 14:40:24.206979 3712 log.go:172] (0xc0009e0420) (0xc00039e5a0) Create stream\nI0601 14:40:24.206988 3712 log.go:172] (0xc0009e0420) (0xc00039e5a0) Stream added, broadcasting: 5\nI0601 14:40:24.207774 3712 log.go:172] (0xc0009e0420) Reply frame received for 5\nI0601 14:40:24.265890 3712 log.go:172] (0xc0009e0420) Data frame received for 5\nI0601 14:40:24.265919 3712 log.go:172] (0xc00039e5a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0601 14:40:24.265952 3712 log.go:172] (0xc0009e0420) Data frame received for 3\nI0601 14:40:24.265991 3712 log.go:172] (0xc00039e500) (3) Data frame handling\nI0601 14:40:24.266007 3712 log.go:172] (0xc00039e500) (3) Data frame sent\nI0601 14:40:24.266020 3712 log.go:172] (0xc0009e0420) Data frame received for 3\nI0601 14:40:24.266030 3712 log.go:172] (0xc00039e500) (3) Data frame handling\nI0601 14:40:24.266067 3712 log.go:172] (0xc00039e5a0) (5) Data frame sent\nI0601 14:40:24.266099 3712 log.go:172] (0xc0009e0420) Data frame received for 5\nI0601 14:40:24.266108 3712 log.go:172] (0xc00039e5a0) (5) Data frame handling\nI0601 14:40:24.267143 3712 log.go:172] (0xc0009e0420) Data frame received for 1\nI0601 14:40:24.267169 3712 log.go:172] (0xc0005406e0) (1) Data frame handling\nI0601 14:40:24.267181 3712 log.go:172] (0xc0005406e0) (1) Data frame sent\nI0601 14:40:24.267305 3712 log.go:172] (0xc0009e0420) (0xc0005406e0) Stream removed, broadcasting: 1\nI0601 14:40:24.267357 3712 log.go:172] (0xc0009e0420) Go away received\nI0601 
14:40:24.267678 3712 log.go:172] (0xc0009e0420) (0xc0005406e0) Stream removed, broadcasting: 1\nI0601 14:40:24.267697 3712 log.go:172] (0xc0009e0420) (0xc00039e500) Stream removed, broadcasting: 3\nI0601 14:40:24.267707 3712 log.go:172] (0xc0009e0420) (0xc00039e5a0) Stream removed, broadcasting: 5\n" Jun 1 14:40:24.272: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 1 14:40:24.272: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 1 14:40:24.272: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 1 14:40:54.286: INFO: Deleting all statefulset in ns statefulset-1316 Jun 1 14:40:54.289: INFO: Scaling statefulset ss to 0 Jun 1 14:40:54.298: INFO: Waiting for statefulset status.replicas updated to 0 Jun 1 14:40:54.300: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:40:54.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1316" for this suite. 
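[Editor's note] The StatefulSet test above toggles pod readiness by moving nginx's index.html out of (and back into) the webroot, then polls with messages like "Waiting for pod ss-0 to enter Running - Ready=false". A minimal sketch of that kind of poll-until-condition loop, with a plain callable standing in for a real readiness probe (all names here are illustrative, not the e2e framework's API):

```python
import time

def wait_for_condition(check, timeout=10.0, interval=1.0):
    """Poll check() until it returns True or the timeout elapses.

    Mirrors the e2e framework's wait loops; returns True on success,
    False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Toy stand-in for a readiness probe: "ready" once index.html is restored.
state = {"index_html_present": False}
ready = wait_for_condition(lambda: state["index_html_present"],
                           timeout=0.05, interval=0.01)
```

In the real test, the probe is an HTTP GET against nginx, which fails while index.html is parked in /tmp, so the controller reports Ready=false until the file is moved back.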
Jun 1 14:41:00.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:41:00.410: INFO: namespace statefulset-1316 deletion completed in 6.091421669s • [SLOW TEST:98.320 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:41:00.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 1 14:41:00.481: INFO: Waiting up to 5m0s for pod "downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf" in namespace "downward-api-9182" to be "success or failure" Jun 1 14:41:00.484: INFO: Pod "downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.165353ms Jun 1 14:41:02.539: INFO: Pod "downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057331859s Jun 1 14:41:04.543: INFO: Pod "downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061623271s STEP: Saw pod success Jun 1 14:41:04.543: INFO: Pod "downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf" satisfied condition "success or failure" Jun 1 14:41:04.546: INFO: Trying to get logs from node iruya-worker2 pod downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf container dapi-container: STEP: delete the pod Jun 1 14:41:04.570: INFO: Waiting for pod downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf to disappear Jun 1 14:41:04.706: INFO: Pod downward-api-857611bb-7b05-4e37-847f-b7ad34574ecf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:41:04.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9182" for this suite. 
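[Editor's note] The Downward API test above exposes a container's limits.cpu/memory and requests.cpu/memory as environment variables. A sketch of the pod spec pattern it exercises, written as a Python dict; the container name, image, and resource values are placeholders, but the `valueFrom.resourceFieldRef` shape is the standard Kubernetes API for this:

```python
# Illustrative pod spec: resource limits/requests surfaced as env vars.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},  # placeholder name
    "spec": {
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "resources": {
                "requests": {"cpu": "250m", "memory": "32Mi"},
                "limits": {"cpu": "500m", "memory": "64Mi"},
            },
            "env": [
                {"name": "CPU_LIMIT",
                 "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
                {"name": "MEMORY_REQUEST",
                 "valueFrom": {"resourceFieldRef": {"resource": "requests.memory"}}},
            ],
        }],
        "restartPolicy": "Never",
    },
}
```

The test then reads the container's log (the `env` output) to confirm each variable was populated from the resolved resource value.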
Jun 1 14:41:10.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:41:10.819: INFO: namespace downward-api-9182 deletion completed in 6.106057531s • [SLOW TEST:10.408 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:41:10.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 1 14:41:11.377: INFO: Pod name wrapped-volume-race-204edf15-bb4f-4209-9ec7-dbd6afde276a: Found 0 pods out of 5 Jun 1 14:41:16.387: INFO: Pod name wrapped-volume-race-204edf15-bb4f-4209-9ec7-dbd6afde276a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-204edf15-bb4f-4209-9ec7-dbd6afde276a in namespace emptydir-wrapper-3253, will wait for the garbage collector to delete the pods Jun 1 
14:41:30.482: INFO: Deleting ReplicationController wrapped-volume-race-204edf15-bb4f-4209-9ec7-dbd6afde276a took: 17.209447ms Jun 1 14:41:30.782: INFO: Terminating ReplicationController wrapped-volume-race-204edf15-bb4f-4209-9ec7-dbd6afde276a pods took: 300.28931ms STEP: Creating RC which spawns configmap-volume pods Jun 1 14:42:12.419: INFO: Pod name wrapped-volume-race-aadf820f-a2c6-4b25-96bc-205155e2263c: Found 0 pods out of 5 Jun 1 14:42:17.447: INFO: Pod name wrapped-volume-race-aadf820f-a2c6-4b25-96bc-205155e2263c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-aadf820f-a2c6-4b25-96bc-205155e2263c in namespace emptydir-wrapper-3253, will wait for the garbage collector to delete the pods Jun 1 14:42:31.694: INFO: Deleting ReplicationController wrapped-volume-race-aadf820f-a2c6-4b25-96bc-205155e2263c took: 12.018428ms Jun 1 14:42:31.995: INFO: Terminating ReplicationController wrapped-volume-race-aadf820f-a2c6-4b25-96bc-205155e2263c pods took: 300.353565ms STEP: Creating RC which spawns configmap-volume pods Jun 1 14:43:12.425: INFO: Pod name wrapped-volume-race-c704b7fc-1fb8-408e-b3fa-0f054d45febd: Found 0 pods out of 5 Jun 1 14:43:17.432: INFO: Pod name wrapped-volume-race-c704b7fc-1fb8-408e-b3fa-0f054d45febd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c704b7fc-1fb8-408e-b3fa-0f054d45febd in namespace emptydir-wrapper-3253, will wait for the garbage collector to delete the pods Jun 1 14:43:31.518: INFO: Deleting ReplicationController wrapped-volume-race-c704b7fc-1fb8-408e-b3fa-0f054d45febd took: 10.242705ms Jun 1 14:43:31.818: INFO: Terminating ReplicationController wrapped-volume-race-c704b7fc-1fb8-408e-b3fa-0f054d45febd pods took: 300.199203ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
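[Editor's note] The EmptyDir-wrapper race test above creates 50 ConfigMaps and repeatedly spawns pods that mount all of them, stressing concurrent volume setup/teardown. A sketch of how such a pod's volume and volumeMount lists could be generated (names and mount paths are illustrative, not the test's actual ones):

```python
def configmap_volumes(n):
    """Build matching volume / volumeMount entries for n ConfigMaps,
    echoing the test's "Creating 50 configmaps" step."""
    volumes, mounts = [], []
    for i in range(n):
        name = f"wrapped-volume-race-cm-{i}"  # placeholder naming scheme
        volumes.append({"name": name, "configMap": {"name": name}})
        mounts.append({"name": name, "mountPath": f"/etc/config-{i}"})
    return volumes, mounts

volumes, mounts = configmap_volumes(50)
```

Mounting many ConfigMap volumes in every replica is what historically raced in the kubelet's wrapped-volume handling, which is why the test loops create/delete cycles through the garbage collector.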
Jun 1 14:44:13.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3253" for this suite. Jun 1 14:44:21.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 1 14:44:22.071: INFO: namespace emptydir-wrapper-3253 deletion completed in 8.100610132s • [SLOW TEST:191.251 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 1 14:44:22.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-b2362a46-3bd6-4e5e-be01-664b889cbd61 STEP: Creating a pod to test consume secrets Jun 1 14:44:22.179: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76" in namespace "projected-7508" to be "success or failure" Jun 1 14:44:22.195: INFO: Pod 
"pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76": Phase="Pending", Reason="", readiness=false. Elapsed: 15.632656ms Jun 1 14:44:24.200: INFO: Pod "pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020522815s Jun 1 14:44:26.204: INFO: Pod "pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76": Phase="Running", Reason="", readiness=true. Elapsed: 4.024819003s Jun 1 14:44:28.210: INFO: Pod "pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030177158s STEP: Saw pod success Jun 1 14:44:28.210: INFO: Pod "pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76" satisfied condition "success or failure" Jun 1 14:44:28.213: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76 container projected-secret-volume-test: STEP: delete the pod Jun 1 14:44:28.266: INFO: Waiting for pod pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76 to disappear Jun 1 14:44:28.270: INFO: Pod pod-projected-secrets-00a110a8-7222-42cd-a12c-5a2ccc0ffb76 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 1 14:44:28.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7508" for this suite. 
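[Editor's note] The projected-secret test above mounts a Secret through a `projected` volume with `defaultMode` set, then checks the file permissions inside the pod. A sketch of that volume definition as a Python dict; the secret name is a placeholder, and the mode value here is an assumed example (the API takes it in decimal, so 0o400 is stored as 256):

```python
# Illustrative projected volume with a secret source and defaultMode.
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "defaultMode": 0o400,  # read-only for owner; decimal 256 on the wire
        "sources": [
            {"secret": {"name": "projected-secret-test"}},  # placeholder
        ],
    },
}
```

With `defaultMode` set, every file projected from the secret is created with that mode unless an individual item overrides it, which is exactly what the test's container verifies.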
Jun 1 14:44:34.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:44:34.382: INFO: namespace projected-7508 deletion completed in 6.108619432s
• [SLOW TEST:12.310 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:44:34.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 1 14:44:34.459: INFO: Waiting up to 5m0s for pod "downward-api-a9fe971f-7d05-4c30-97de-0248993924b3" in namespace "downward-api-850" to be "success or failure"
Jun 1 14:44:34.462: INFO: Pod "downward-api-a9fe971f-7d05-4c30-97de-0248993924b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557741ms
Jun 1 14:44:36.466: INFO: Pod "downward-api-a9fe971f-7d05-4c30-97de-0248993924b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006680318s
Jun 1 14:44:38.470: INFO: Pod "downward-api-a9fe971f-7d05-4c30-97de-0248993924b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011164879s
STEP: Saw pod success
Jun 1 14:44:38.470: INFO: Pod "downward-api-a9fe971f-7d05-4c30-97de-0248993924b3" satisfied condition "success or failure"
Jun 1 14:44:38.474: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a9fe971f-7d05-4c30-97de-0248993924b3 container dapi-container:
STEP: delete the pod
Jun 1 14:44:38.560: INFO: Waiting for pod downward-api-a9fe971f-7d05-4c30-97de-0248993924b3 to disappear
Jun 1 14:44:38.570: INFO: Pod downward-api-a9fe971f-7d05-4c30-97de-0248993924b3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:44:38.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-850" for this suite.
Jun 1 14:44:44.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:44:44.670: INFO: namespace downward-api-850 deletion completed in 6.096018707s
• [SLOW TEST:10.288 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:44:44.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jun 1 14:44:44.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jun 1 14:44:47.779: INFO: stderr: ""
Jun 1 14:44:47.779: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:44:47.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5655" for this suite.
Jun 1 14:44:53.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:44:53.877: INFO: namespace kubectl-5655 deletion completed in 6.093157094s
• [SLOW TEST:9.206 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 1 14:44:53.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 1 14:44:58.466: INFO: Successfully updated pod "labelsupdate466cdeae-3fb5-4b34-ae83-99073379f6d7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 1 14:45:02.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9087" for this suite.
Jun 1 14:45:24.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 1 14:45:24.612: INFO: namespace downward-api-9087 deletion completed in 22.101325201s
• [SLOW TEST:30.735 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
Jun 1 14:45:24.613: INFO: Running AfterSuite actions on all nodes
Jun 1 14:45:24.613: INFO: Running AfterSuite actions on node 1
Jun 1 14:45:24.613: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6570.252 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS