I0104 11:29:47.539061 8 e2e.go:243] Starting e2e run "82b82fc6-6f90-4890-a172-e46bba02a8db" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578137386 - Will randomize all specs
Will run 215 of 4412 specs

Jan 4 11:29:47.842: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 11:29:47.846: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 4 11:29:47.875: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 4 11:29:47.910: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 4 11:29:47.910: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 4 11:29:47.910: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 4 11:29:47.921: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 4 11:29:47.921: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 4 11:29:47.921: INFO: e2e test version: v1.15.7
Jan 4 11:29:47.929: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:29:47.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Jan 4 11:29:48.065: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6041
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6041
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6041
Jan 4 11:29:48.109: INFO: Found 0 stateful pods, waiting for 1
Jan 4 11:29:58.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 4 11:29:58.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 4 11:30:01.051: INFO: stderr: "I0104 11:30:00.689732 29 log.go:172] (0xc0007dc420) (0xc0007d6820) Create stream\nI0104 11:30:00.689826 29 log.go:172] (0xc0007dc420) (0xc0007d6820) Stream added, broadcasting: 1\nI0104 11:30:00.700240 29 log.go:172] (0xc0007dc420) Reply frame received for 1\nI0104 11:30:00.700279 29 log.go:172] (0xc0007dc420) (0xc000551b80) Create stream\nI0104 11:30:00.700288 29 log.go:172] (0xc0007dc420) (0xc000551b80) Stream added, broadcasting: 3\nI0104 11:30:00.701955 29 log.go:172] (0xc0007dc420) Reply frame received for 3\nI0104 11:30:00.701983 29 log.go:172] (0xc0007dc420) (0xc000584320) Create stream\nI0104 11:30:00.701992 29 log.go:172] (0xc0007dc420) (0xc000584320) Stream added, broadcasting: 5\nI0104 11:30:00.703440 29 log.go:172] (0xc0007dc420) Reply frame received for 5\nI0104 11:30:00.867536 29 log.go:172] (0xc0007dc420) Data frame received for 5\nI0104 11:30:00.867623 29 log.go:172] (0xc000584320) (5) Data frame handling\nI0104 11:30:00.867649 29 log.go:172] (0xc000584320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 11:30:00.913891 29 log.go:172] (0xc0007dc420) Data frame received for 3\nI0104 11:30:00.913932 29 log.go:172] (0xc000551b80) (3) Data frame handling\nI0104 11:30:00.913948 29 log.go:172] (0xc000551b80) (3) Data frame sent\nI0104 11:30:01.044803 29 log.go:172] (0xc0007dc420) (0xc000551b80) Stream removed, broadcasting: 3\nI0104 11:30:01.044858 29 log.go:172] (0xc0007dc420) Data frame received for 1\nI0104 11:30:01.044869 29 log.go:172] (0xc0007d6820) (1) Data frame handling\nI0104 11:30:01.044896 29 log.go:172] (0xc0007d6820) (1) Data frame sent\nI0104 11:30:01.045000 29 log.go:172] (0xc0007dc420) (0xc0007d6820) Stream removed, broadcasting: 1\nI0104 11:30:01.045085 29 log.go:172] (0xc0007dc420) (0xc000584320) Stream removed, broadcasting: 5\nI0104 11:30:01.045243 29 log.go:172] (0xc0007dc420) Go away received\nI0104 11:30:01.045482 29 log.go:172] (0xc0007dc420) (0xc0007d6820) Stream removed, broadcasting: 1\nI0104 11:30:01.045501 29 log.go:172] (0xc0007dc420) (0xc000551b80) Stream removed, broadcasting: 3\nI0104 11:30:01.045513 29 log.go:172] (0xc0007dc420) (0xc000584320) Stream removed, broadcasting: 5\n"
Jan 4 11:30:01.051: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 4 11:30:01.051: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 4 11:30:01.060: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 4 11:30:11.069: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 4 11:30:11.070: INFO: Waiting for statefulset status.replicas updated to 0
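The exec above is how this spec toggles pod health: nginx's readiness probe serves `/index.html`, so moving the file out of the webroot fails the probe, and moving it back restores it. The command is deliberately wrapped in `|| true` so the exec exits 0 even when the file is not there (later in this run, ss-1 and ss-2 print `mv: can't rename '/tmp/index.html': No such file or directory` followed by `+ true`). A minimal local sketch of that idiom, with a temp directory standing in for `/usr/share/nginx/html` (no cluster required):

```shell
# Stand-in for the nginx webroot the test manipulates via kubectl exec.
webroot=$(mktemp -d)
touch "$webroot/index.html"

# First mv succeeds ("breaks readiness"); second mv finds nothing to move
# and fails, but "|| true" keeps the overall exit status 0 -- which is what
# lets the test run the identical command against every replica blindly.
mv -v "$webroot/index.html" /tmp/ || true
mv -v "$webroot/index.html" /tmp/ || true
echo "exit status: $?"   # prints "exit status: 0"
```

The same pattern appears verbatim in the logged commands, just prefixed with `kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-N -- /bin/sh -x -c`.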
Jan 4 11:30:11.093: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:11.093: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:11.093: INFO:
Jan 4 11:30:11.093: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 4 11:30:12.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987413298s
Jan 4 11:30:13.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.926592199s
Jan 4 11:30:14.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.89945635s
Jan 4 11:30:15.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.886316774s
Jan 4 11:30:16.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.879431018s
Jan 4 11:30:17.530: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.861304198s
Jan 4 11:30:18.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.549957119s
Jan 4 11:30:21.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.530443819s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6041
Jan 4 11:30:22.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 4 11:30:23.398: INFO: stderr: "I0104 11:30:22.745568 53 log.go:172] (0xc0005466e0) (0xc000456dc0) Create stream\nI0104 11:30:22.745671 53 log.go:172] (0xc0005466e0) (0xc000456dc0) Stream added, broadcasting: 1\nI0104 11:30:22.751203 53 log.go:172] (0xc0005466e0) Reply frame received for 1\nI0104 11:30:22.751294 53 log.go:172] (0xc0005466e0) (0xc0006a4000) Create stream\nI0104 11:30:22.751318 53 log.go:172] (0xc0005466e0) (0xc0006a4000) Stream added, broadcasting: 3\nI0104 11:30:22.753235 53 log.go:172] (0xc0005466e0) Reply frame received for 3\nI0104 11:30:22.753277 53 log.go:172] (0xc0005466e0) (0xc0004565a0) Create stream\nI0104 11:30:22.753295 53 log.go:172] (0xc0005466e0) (0xc0004565a0) Stream added, broadcasting: 5\nI0104 11:30:22.754494 53 log.go:172] (0xc0005466e0) Reply frame received for 5\nI0104 11:30:23.109730 53 log.go:172] (0xc0005466e0) Data frame received for 3\nI0104 11:30:23.110035 53 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0104 11:30:23.110059 53 log.go:172] (0xc0006a4000) (3) Data frame sent\nI0104 11:30:23.110087 53 log.go:172] (0xc0005466e0) Data frame received for 5\nI0104 11:30:23.110102 53 log.go:172] (0xc0004565a0) (5) Data frame handling\nI0104 11:30:23.110114 53 log.go:172] (0xc0004565a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 11:30:23.390261 53 log.go:172] (0xc0005466e0) (0xc0006a4000) Stream removed, broadcasting: 3\nI0104 11:30:23.390405 53 log.go:172] (0xc0005466e0) Data frame received for 1\nI0104 11:30:23.390444 53 log.go:172] (0xc0005466e0) (0xc0004565a0) Stream removed, broadcasting: 5\nI0104 11:30:23.390484 53 log.go:172] (0xc000456dc0) (1) Data frame handling\nI0104 11:30:23.390509 53 log.go:172] (0xc000456dc0) (1) Data frame sent\nI0104 11:30:23.390515 53 log.go:172] (0xc0005466e0) (0xc000456dc0) Stream removed, broadcasting: 1\nI0104 11:30:23.390755 53 log.go:172] (0xc0005466e0) (0xc000456dc0) Stream removed, broadcasting: 1\nI0104 11:30:23.390770 53 log.go:172] (0xc0005466e0) (0xc0006a4000) Stream removed, broadcasting: 3\nI0104 11:30:23.390774 53 log.go:172] (0xc0005466e0) (0xc0004565a0) Stream removed, broadcasting: 5\nI0104 11:30:23.390922 53 log.go:172] (0xc0005466e0) Go away received\n"
Jan 4 11:30:23.398: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 4 11:30:23.398: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 4 11:30:23.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 4 11:30:24.088: INFO: stderr: "I0104 11:30:23.602899 71 log.go:172] (0xc00081c630) (0xc000804960) Create stream\nI0104 11:30:23.603196 71 log.go:172] (0xc00081c630) (0xc000804960) Stream added, broadcasting: 1\nI0104 11:30:23.616165 71 log.go:172] (0xc00081c630) Reply frame received for 1\nI0104 11:30:23.616253 71 log.go:172] (0xc00081c630) (0xc000972000) Create stream\nI0104 11:30:23.616270 71 log.go:172] (0xc00081c630) (0xc000972000) Stream added, broadcasting: 3\nI0104 11:30:23.623822 71 log.go:172] (0xc00081c630) Reply frame received for 3\nI0104 11:30:23.623875 71 log.go:172] (0xc00081c630) (0xc000804000) Create stream\nI0104 11:30:23.623886 71 log.go:172] (0xc00081c630) (0xc000804000) Stream added, broadcasting: 5\nI0104 11:30:23.626031 71 log.go:172] (0xc00081c630) Reply frame received for 5\nI0104 11:30:23.875480 71 log.go:172] (0xc00081c630) Data frame received for 5\nI0104 11:30:23.875584 71 log.go:172] (0xc000804000) (5) Data frame handling\nI0104 11:30:23.875601 71 log.go:172] (0xc000804000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 11:30:23.936707 71 log.go:172] (0xc00081c630) Data frame received for 3\nI0104 11:30:23.936814 71 log.go:172] (0xc000972000) (3) Data frame handling\nI0104 11:30:23.936827 71 log.go:172] (0xc000972000) (3) Data frame sent\nI0104 11:30:23.936889 71 log.go:172] (0xc00081c630) Data frame received for 5\nI0104 11:30:23.936899 71 log.go:172] (0xc000804000) (5) Data frame handling\nI0104 11:30:23.936905 71 log.go:172] (0xc000804000) (5) Data frame sent\nI0104 11:30:23.936912 71 log.go:172] (0xc00081c630) Data frame received for 5\nI0104 11:30:23.936917 71 log.go:172] (0xc000804000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0104 11:30:23.936955 71 log.go:172] (0xc000804000) (5) Data frame sent\nI0104 11:30:24.080700 71 log.go:172] (0xc00081c630) (0xc000972000) Stream removed, broadcasting: 3\nI0104 11:30:24.080802 71 log.go:172] (0xc00081c630) Data frame received for 1\nI0104 11:30:24.080811 71 log.go:172] (0xc000804960) (1) Data frame handling\nI0104 11:30:24.080854 71 log.go:172] (0xc000804960) (1) Data frame sent\nI0104 11:30:24.080863 71 log.go:172] (0xc00081c630) (0xc000804960) Stream removed, broadcasting: 1\nI0104 11:30:24.080951 71 log.go:172] (0xc00081c630) (0xc000804000) Stream removed, broadcasting: 5\nI0104 11:30:24.081357 71 log.go:172] (0xc00081c630) Go away received\nI0104 11:30:24.081599 71 log.go:172] (0xc00081c630) (0xc000804960) Stream removed, broadcasting: 1\nI0104 11:30:24.081661 71 log.go:172] (0xc00081c630) (0xc000972000) Stream removed, broadcasting: 3\nI0104 11:30:24.081690 71 log.go:172] (0xc00081c630) (0xc000804000) Stream removed, broadcasting: 5\n"
Jan 4 11:30:24.088: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 4 11:30:24.088: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 4 11:30:24.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 4 11:30:24.496: INFO: stderr: "I0104 11:30:24.272080 89 log.go:172] (0xc0003a6420) (0xc000870640) Create stream\nI0104 11:30:24.272381 89 log.go:172] (0xc0003a6420) (0xc000870640) Stream added, broadcasting: 1\nI0104 11:30:24.279033 89 log.go:172] (0xc0003a6420) Reply frame received for 1\nI0104 11:30:24.279081 89 log.go:172] (0xc0003a6420) (0xc000458320) Create stream\nI0104 11:30:24.279095 89 log.go:172] (0xc0003a6420) (0xc000458320) Stream added, broadcasting: 3\nI0104 11:30:24.280399 89 log.go:172] (0xc0003a6420) Reply frame received for 3\nI0104 11:30:24.280419 89 log.go:172] (0xc0003a6420) (0xc0008706e0) Create stream\nI0104 11:30:24.280428 89 log.go:172] (0xc0003a6420) (0xc0008706e0) Stream added, broadcasting: 5\nI0104 11:30:24.281559 89 log.go:172] (0xc0003a6420) Reply frame received for 5\nI0104 11:30:24.373737 89 log.go:172] (0xc0003a6420) Data frame received for 5\nI0104 11:30:24.373799 89 log.go:172] (0xc0008706e0) (5) Data frame handling\nI0104 11:30:24.373825 89 log.go:172] (0xc0008706e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 11:30:24.374360 89 log.go:172] (0xc0003a6420) Data frame received for 5\nI0104 11:30:24.374369 89 log.go:172] (0xc0008706e0) (5) Data frame handling\nI0104 11:30:24.374375 89 log.go:172] (0xc0008706e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0104 11:30:24.374539 89 log.go:172] (0xc0003a6420) Data frame received for 3\nI0104 11:30:24.374589 89 log.go:172] (0xc000458320) (3) Data frame handling\nI0104 11:30:24.374608 89 log.go:172] (0xc000458320) (3) Data frame sent\nI0104 11:30:24.486597 89 log.go:172] (0xc0003a6420) Data frame received for 1\nI0104 11:30:24.486683 89 log.go:172] (0xc0003a6420) (0xc000458320) Stream removed, broadcasting: 3\nI0104 11:30:24.486732 89 log.go:172] (0xc000870640) (1) Data frame handling\nI0104 11:30:24.486757 89 log.go:172] (0xc0003a6420) (0xc0008706e0) Stream removed, broadcasting: 5\nI0104 11:30:24.486796 89 log.go:172] (0xc000870640) (1) Data frame sent\nI0104 11:30:24.486811 89 log.go:172] (0xc0003a6420) (0xc000870640) Stream removed, broadcasting: 1\nI0104 11:30:24.486820 89 log.go:172] (0xc0003a6420) Go away received\nI0104 11:30:24.487394 89 log.go:172] (0xc0003a6420) (0xc000870640) Stream removed, broadcasting: 1\nI0104 11:30:24.487408 89 log.go:172] (0xc0003a6420) (0xc000458320) Stream removed, broadcasting: 3\nI0104 11:30:24.487423 89 log.go:172] (0xc0003a6420) (0xc0008706e0) Stream removed, broadcasting: 5\n"
Jan 4 11:30:24.496: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 4 11:30:24.496: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 4 11:30:24.514: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 4 11:30:24.514: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 4 11:30:24.514: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 4 11:30:34.535: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 4 11:30:34.535: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 4 11:30:34.535: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 4 11:30:34.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 4 11:30:35.266: INFO: stderr: "I0104 11:30:34.754716 108 log.go:172] (0xc00076c370) (0xc000740c80) Create stream\nI0104 11:30:34.754870 108 log.go:172] (0xc00076c370) (0xc000740c80) Stream added, broadcasting: 1\nI0104 11:30:34.769139 108 log.go:172] (0xc00076c370) Reply frame received for 1\nI0104 11:30:34.769210 108 log.go:172] (0xc00076c370) (0xc000740000) Create stream\nI0104 11:30:34.769222 108 log.go:172] (0xc00076c370) (0xc000740000) Stream added, broadcasting: 3\nI0104 11:30:34.771747 108 log.go:172] (0xc00076c370) Reply frame received for 3\nI0104 11:30:34.771824 108 log.go:172] (0xc00076c370) (0xc000010140) Create stream\nI0104 11:30:34.771844 108 log.go:172] (0xc00076c370) (0xc000010140) Stream added, broadcasting: 5\nI0104 11:30:34.777852 108 log.go:172] (0xc00076c370) Reply frame received for 5\nI0104 11:30:35.044356 108 log.go:172] (0xc00076c370) Data frame received for 5\nI0104 11:30:35.044428 108 log.go:172] (0xc000010140) (5) Data frame handling\nI0104 11:30:35.044444 108 log.go:172] (0xc000010140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 11:30:35.059740 108 log.go:172] (0xc00076c370) Data frame received for 3\nI0104 11:30:35.059805 108 log.go:172] (0xc000740000) (3) Data frame handling\nI0104 11:30:35.059853 108 log.go:172] (0xc000740000) (3) Data frame sent\nI0104 11:30:35.261936 108 log.go:172] (0xc00076c370) (0xc000740000) Stream removed, broadcasting: 3\nI0104 11:30:35.262069 108 log.go:172] (0xc00076c370) Data frame received for 1\nI0104 11:30:35.262078 108 log.go:172] (0xc000740c80) (1) Data frame handling\nI0104 11:30:35.262088 108 log.go:172] (0xc000740c80) (1) Data frame sent\nI0104 11:30:35.262091 108 log.go:172] (0xc00076c370) (0xc000740c80) Stream removed, broadcasting: 1\nI0104 11:30:35.262354 108 log.go:172] (0xc00076c370) (0xc000010140) Stream removed, broadcasting: 5\nI0104 11:30:35.262378 108 log.go:172] (0xc00076c370) (0xc000740c80) Stream removed, broadcasting: 1\nI0104 11:30:35.262386 108 log.go:172] (0xc00076c370) (0xc000740000) Stream removed, broadcasting: 3\nI0104 11:30:35.262392 108 log.go:172] (0xc00076c370) (0xc000010140) Stream removed, broadcasting: 5\nI0104 11:30:35.262498 108 log.go:172] (0xc00076c370) Go away received\n"
Jan 4 11:30:35.266: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 4 11:30:35.266: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 4 11:30:35.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 4 11:30:35.664: INFO: stderr: "I0104 11:30:35.402925 120 log.go:172] (0xc0008ba580) (0xc0005d4be0) Create stream\nI0104 11:30:35.403099 120 log.go:172] (0xc0008ba580) (0xc0005d4be0) Stream added, broadcasting: 1\nI0104 11:30:35.406280 120 log.go:172] (0xc0008ba580) Reply frame received for 1\nI0104 11:30:35.406302 120 log.go:172] (0xc0008ba580) (0xc0008b2000) Create stream\nI0104 11:30:35.406309 120 log.go:172] (0xc0008ba580) (0xc0008b2000) Stream added, broadcasting: 3\nI0104 11:30:35.407270 120 log.go:172] (0xc0008ba580) Reply frame received for 3\nI0104 11:30:35.407290 120 log.go:172] (0xc0008ba580) (0xc0005d4c80) Create stream\nI0104 11:30:35.407295 120 log.go:172] (0xc0008ba580) (0xc0005d4c80) Stream added, broadcasting: 5\nI0104 11:30:35.408602 120 log.go:172] (0xc0008ba580) Reply frame received for 5\nI0104 11:30:35.499190 120 log.go:172] (0xc0008ba580) Data frame received for 5\nI0104 11:30:35.499254 120 log.go:172] (0xc0005d4c80) (5) Data frame handling\nI0104 11:30:35.499270 120 log.go:172] (0xc0005d4c80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 11:30:35.534303 120 log.go:172] (0xc0008ba580) Data frame received for 3\nI0104 11:30:35.534336 120 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0104 11:30:35.534350 120 log.go:172] (0xc0008b2000) (3) Data frame sent\nI0104 11:30:35.658172 120 log.go:172] (0xc0008ba580) (0xc0008b2000) Stream removed, broadcasting: 3\nI0104 11:30:35.658402 120 log.go:172] (0xc0008ba580) Data frame received for 1\nI0104 11:30:35.658449 120 log.go:172] (0xc0008ba580) (0xc0005d4c80) Stream removed, broadcasting: 5\nI0104 11:30:35.658470 120 log.go:172] (0xc0005d4be0) (1) Data frame handling\nI0104 11:30:35.658478 120 log.go:172] (0xc0005d4be0) (1) Data frame sent\nI0104 11:30:35.658485 120 log.go:172] (0xc0008ba580) (0xc0005d4be0) Stream removed, broadcasting: 1\nI0104 11:30:35.658494 120 log.go:172] (0xc0008ba580) Go away received\nI0104 11:30:35.658966 120 log.go:172] (0xc0008ba580) (0xc0005d4be0) Stream removed, broadcasting: 1\nI0104 11:30:35.658996 120 log.go:172] (0xc0008ba580) (0xc0008b2000) Stream removed, broadcasting: 3\nI0104 11:30:35.659057 120 log.go:172] (0xc0008ba580) (0xc0005d4c80) Stream removed, broadcasting: 5\n"
Jan 4 11:30:35.664: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 4 11:30:35.664: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 4 11:30:35.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 4 11:30:36.590: INFO: stderr: "I0104 11:30:35.885874 133 log.go:172] (0xc0008f0630) (0xc0008e6960) Create stream\nI0104 11:30:35.886207 133 log.go:172] (0xc0008f0630) (0xc0008e6960) Stream added, broadcasting: 1\nI0104 11:30:35.907318 133 log.go:172] (0xc0008f0630) Reply frame received for 1\nI0104 11:30:35.907402 133 log.go:172] (0xc0008f0630) (0xc0008e6000) Create stream\nI0104 11:30:35.907416 133 log.go:172] (0xc0008f0630) (0xc0008e6000) Stream added, broadcasting: 3\nI0104 11:30:35.914050 133 log.go:172] (0xc0008f0630) Reply frame received for 3\nI0104 11:30:35.914148 133 log.go:172] (0xc0008f0630) (0xc0005c01e0) Create stream\nI0104 11:30:35.914175 133 log.go:172] (0xc0008f0630) (0xc0005c01e0) Stream added, broadcasting: 5\nI0104 11:30:35.916942 133 log.go:172] (0xc0008f0630) Reply frame received for 5\nI0104 11:30:36.254597 133 log.go:172] (0xc0008f0630) Data frame received for 5\nI0104 11:30:36.254652 133 log.go:172] (0xc0005c01e0) (5) Data frame handling\nI0104 11:30:36.254668 133 log.go:172] (0xc0005c01e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 11:30:36.298803 133 log.go:172] (0xc0008f0630) Data frame received for 3\nI0104 11:30:36.299000 133 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0104 11:30:36.299017 133 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0104 11:30:36.580673 133 log.go:172] (0xc0008f0630) (0xc0008e6000) Stream removed, broadcasting: 3\nI0104 11:30:36.581056 133 log.go:172] (0xc0008f0630) Data frame received for 1\nI0104 11:30:36.581172 133 log.go:172] (0xc0008e6960) (1) Data frame handling\nI0104 11:30:36.581237 133 log.go:172] (0xc0008e6960) (1) Data frame sent\nI0104 11:30:36.581292 133 log.go:172] (0xc0008f0630) (0xc0008e6960) Stream removed, broadcasting: 1\nI0104 11:30:36.581893 133 log.go:172] (0xc0008f0630) (0xc0005c01e0) Stream removed, broadcasting: 5\nI0104 11:30:36.581971 133 log.go:172] (0xc0008f0630) (0xc0008e6960) Stream removed, broadcasting: 1\nI0104 11:30:36.582044 133 log.go:172] (0xc0008f0630) (0xc0008e6000) Stream removed, broadcasting: 3\nI0104 11:30:36.582113 133 log.go:172] (0xc0008f0630) (0xc0005c01e0) Stream removed, broadcasting: 5\nI0104 11:30:36.583073 133 log.go:172] (0xc0008f0630) Go away received\n"
Jan 4 11:30:36.590: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 4 11:30:36.590: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 4 11:30:36.590: INFO: Waiting for statefulset status.replicas updated to 0
Jan 4 11:30:36.602: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 4 11:30:46.619: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 4 11:30:46.619: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 4 11:30:46.619: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 4 11:30:46.649: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:46.650: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:46.650: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:46.650: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:46.650: INFO:
Jan 4 11:30:46.650: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 4 11:30:47.987: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:47.987: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:47.987: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:47.987: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:47.987: INFO:
Jan 4 11:30:47.987: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 4 11:30:49.203: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:49.203: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:49.203: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:49.203: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:49.203: INFO:
Jan 4 11:30:49.203: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 4 11:30:50.211: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:50.211: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:50.211: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:50.211: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:50.211: INFO:
Jan 4 11:30:50.211: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 4 11:30:51.391: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:51.391: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:51.392: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:51.392: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:51.392: INFO:
Jan 4 11:30:51.392: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 4 11:30:52.400: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:52.400: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }]
Jan 4 11:30:52.400: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:52.401: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:52.401: INFO:
Jan 4 11:30:52.401: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 4 11:30:53.522: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 11:30:53.522: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status:
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }] Jan 4 11:30:53.522: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }] Jan 4 11:30:53.522: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }] Jan 4 11:30:53.522: INFO: Jan 4 11:30:53.522: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 11:30:54.541: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 11:30:54.541: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }] Jan 4 11:30:54.541: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }] Jan 4 11:30:54.541: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }] Jan 4 11:30:54.541: INFO: Jan 4 11:30:54.541: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 11:30:55.896: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 11:30:55.896: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:29:48 +0000 UTC }] Jan 4 11:30:55.897: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:55.897: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:30:11 +0000 UTC }]
Jan 4 11:30:55.897: INFO:
Jan 4 11:30:55.897: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6041
Jan 4 11:30:56.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 4 11:30:57.110: INFO: rc: 1
Jan 4 11:30:57.110: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0027c9a70 exit status 1 true [0xc0015ecef8 0xc0015ecf10 0xc0015ecf28] [0xc0015ecef8 0xc0015ecf10 0xc0015ecf28] [0xc0015ecf08 0xc0015ecf20] [0xba6c50 0xba6c50] 0xc002271320 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Jan 4 11:31:07.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 4 11:31:07.239: INFO: rc: 1
Jan 4 11:31:07.239: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec
--namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0027c9b60 exit status 1 true [0xc0015ecf30 0xc0015ecf48 0xc0015ecf60] [0xc0015ecf30 0xc0015ecf48 0xc0015ecf60] [0xc0015ecf40 0xc0015ecf58] [0xba6c50 0xba6c50] 0xc002271620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:31:17.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:31:17.390: INFO: rc: 1 Jan 4 11:31:17.390: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002160f00 exit status 1 true [0xc001cf4150 0xc001cf4168 0xc001cf4180] [0xc001cf4150 0xc001cf4168 0xc001cf4180] [0xc001cf4160 0xc001cf4178] [0xba6c50 0xba6c50] 0xc0019272c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:31:27.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:31:27.495: INFO: rc: 1 Jan 4 11:31:27.495: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e85530 exit status 1 true [0xc001c09d48 0xc001c09d60 0xc001c09d78] [0xc001c09d48 0xc001c09d60 0xc001c09d78] [0xc001c09d58 0xc001c09d70] [0xba6c50 0xba6c50] 0xc0020ee6c0 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:31:37.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:31:37.605: INFO: rc: 1 Jan 4 11:31:37.605: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002160ff0 exit status 1 true [0xc001cf4188 0xc001cf41a0 0xc001cf41b8] [0xc001cf4188 0xc001cf41a0 0xc001cf41b8] [0xc001cf4198 0xc001cf41b0] [0xba6c50 0xba6c50] 0xc0019275c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:31:47.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:31:47.751: INFO: rc: 1 Jan 4 11:31:47.751: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00253b350 exit status 1 true [0xc000705fd8 0xc000705ff0 0xc002d5c008] [0xc000705fd8 0xc000705ff0 0xc002d5c008] [0xc000705fe8 0xc002d5c000] [0xba6c50 0xba6c50] 0xc0027be5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:31:57.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:31:57.918: INFO: rc: 1 Jan 4 11:31:57.918: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070ecf0 exit status 1 true [0xc000d26020 0xc000d26038 0xc000d26050] [0xc000d26020 0xc000d26038 0xc000d26050] [0xc000d26030 0xc000d26048] [0xba6c50 0xba6c50] 0xc0010c4c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:32:07.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:32:08.076: INFO: rc: 1 Jan 4 11:32:08.076: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2d50 exit status 1 true [0xc001b5a028 0xc001b5a100 0xc001b5a1f0] [0xc001b5a028 0xc001b5a100 0xc001b5a1f0] [0xc001b5a0d0 0xc001b5a170] [0xba6c50 0xba6c50] 0xc0019cde00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:32:18.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:32:18.235: INFO: rc: 1 Jan 4 11:32:18.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2e40 exit status 1 true [0xc001b5a208 0xc001b5a2a8 0xc001b5a470] 
[0xc001b5a208 0xc001b5a2a8 0xc001b5a470] [0xc001b5a250 0xc001b5a3a8] [0xba6c50 0xba6c50] 0xc00145fb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:32:28.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:32:28.339: INFO: rc: 1 Jan 4 11:32:28.339: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2f30 exit status 1 true [0xc001b5a4d8 0xc001b5a5b8 0xc001b5a658] [0xc001b5a4d8 0xc001b5a5b8 0xc001b5a658] [0xc001b5a598 0xc001b5a648] [0xba6c50 0xba6c50] 0xc001a4b1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:32:38.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:32:38.584: INFO: rc: 1 Jan 4 11:32:38.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2ff0 exit status 1 true [0xc001b5a6b8 0xc001b5a738 0xc001b5a7d8] [0xc001b5a6b8 0xc001b5a738 0xc001b5a7d8] [0xc001b5a710 0xc001b5a788] [0xba6c50 0xba6c50] 0xc001d600c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:32:48.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:32:48.778: INFO: rc: 1 Jan 4 11:32:48.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f30e0 exit status 1 true [0xc001b5a7e0 0xc001b5a840 0xc001b5a948] [0xc001b5a7e0 0xc001b5a840 0xc001b5a948] [0xc001b5a818 0xc001b5a928] [0xba6c50 0xba6c50] 0xc001da4e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:32:58.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:32:58.862: INFO: rc: 1 Jan 4 11:32:58.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070edb0 exit status 1 true [0xc000d26060 0xc000d260a8 0xc000d260d0] [0xc000d26060 0xc000d260a8 0xc000d260d0] [0xc000d26088 0xc000d260c8] [0xba6c50 0xba6c50] 0xc001a12540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:33:08.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:33:08.982: INFO: rc: 1 Jan 4 11:33:08.982: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc00134c090 exit status 1 true [0xc0007e2008 0xc0007e2080 0xc0007e2188] [0xc0007e2008 0xc0007e2080 0xc0007e2188] [0xc0007e2050 0xc0007e2128] [0xba6c50 0xba6c50] 0xc0012b13e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:33:18.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:33:19.056: INFO: rc: 1 Jan 4 11:33:19.056: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016160c0 exit status 1 true [0xc000dd8058 0xc000dd86d8 0xc000dd8a58] [0xc000dd8058 0xc000dd86d8 0xc000dd8a58] [0xc000dd85c0 0xc000dd89d0] [0xba6c50 0xba6c50] 0xc002a0a360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:33:29.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:33:29.222: INFO: rc: 1 Jan 4 11:33:29.222: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016161b0 exit status 1 true [0xc000dd8bc0 0xc000dd8c88 0xc000dd8eb8] [0xc000dd8bc0 0xc000dd8c88 0xc000dd8eb8] [0xc000dd8be8 0xc000dd8df8] [0xba6c50 0xba6c50] 0xc002a0a6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:33:39.222: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:33:39.356: INFO: rc: 1 Jan 4 11:33:39.356: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00134c1b0 exit status 1 true [0xc0007e21b8 0xc0007e22b8 0xc0007e23c0] [0xc0007e21b8 0xc0007e22b8 0xc0007e23c0] [0xc0007e2288 0xc0007e2390] [0xba6c50 0xba6c50] 0xc0019162a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:33:49.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:33:49.543: INFO: rc: 1 Jan 4 11:33:49.543: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00134c270 exit status 1 true [0xc0007e23d8 0xc0007e23f8 0xc0007e2528] [0xc0007e23d8 0xc0007e23f8 0xc0007e2528] [0xc0007e23f0 0xc0007e2520] [0xba6c50 0xba6c50] 0xc001916960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:33:59.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:33:59.700: INFO: rc: 1 Jan 4 11:33:59.700: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070ecc0 exit status 1 true [0xc000d26020 0xc000d26038 0xc000d26050] [0xc000d26020 0xc000d26038 0xc000d26050] [0xc000d26030 0xc000d26048] [0xba6c50 0xba6c50] 0xc0012b13e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:34:09.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:34:09.849: INFO: rc: 1 Jan 4 11:34:09.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00134c0f0 exit status 1 true [0xc0007e2008 0xc0007e2080 0xc0007e2188] [0xc0007e2008 0xc0007e2080 0xc0007e2188] [0xc0007e2050 0xc0007e2128] [0xba6c50 0xba6c50] 0xc001d60a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:34:19.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:34:20.000: INFO: rc: 1 Jan 4 11:34:20.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001616120 exit status 1 true [0xc000dd8058 0xc000dd86d8 0xc000dd8a58] [0xc000dd8058 0xc000dd86d8 0xc000dd8a58] [0xc000dd85c0 0xc000dd89d0] [0xba6c50 0xba6c50] 0xc001a4a8a0 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:34:30.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:34:30.109: INFO: rc: 1 Jan 4 11:34:30.109: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00134c240 exit status 1 true [0xc0007e21b8 0xc0007e22b8 0xc0007e23c0] [0xc0007e21b8 0xc0007e22b8 0xc0007e23c0] [0xc0007e2288 0xc0007e2390] [0xba6c50 0xba6c50] 0xc0011d6e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:34:40.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:34:40.213: INFO: rc: 1 Jan 4 11:34:40.213: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2d20 exit status 1 true [0xc001b5a028 0xc001b5a100 0xc001b5a1f0] [0xc001b5a028 0xc001b5a100 0xc001b5a1f0] [0xc001b5a0d0 0xc001b5a170] [0xba6c50 0xba6c50] 0xc0013a4180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:34:50.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:34:50.359: INFO: rc: 1 Jan 4 11:34:50.359: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00134c360 exit status 1 true [0xc0007e23d8 0xc0007e23f8 0xc0007e2528] [0xc0007e23d8 0xc0007e23f8 0xc0007e2528] [0xc0007e23f0 0xc0007e2520] [0xba6c50 0xba6c50] 0xc0013577a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:35:00.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:35:00.538: INFO: rc: 1 Jan 4 11:35:00.538: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2ea0 exit status 1 true [0xc001b5a208 0xc001b5a2a8 0xc001b5a470] [0xc001b5a208 0xc001b5a2a8 0xc001b5a470] [0xc001b5a250 0xc001b5a3a8] [0xba6c50 0xba6c50] 0xc001a12540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:35:10.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:35:10.741: INFO: rc: 1 Jan 4 11:35:10.741: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070ee10 exit status 1 true [0xc000d26070 0xc000d260b8 0xc000d260d8] 
[0xc000d26070 0xc000d260b8 0xc000d260d8] [0xc000d260a8 0xc000d260d0] [0xba6c50 0xba6c50] 0xc001916360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:35:20.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:35:20.901: INFO: rc: 1 Jan 4 11:35:20.901: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021f2fc0 exit status 1 true [0xc001b5a4d8 0xc001b5a5b8 0xc001b5a658] [0xc001b5a4d8 0xc001b5a5b8 0xc001b5a658] [0xc001b5a598 0xc001b5a648] [0xba6c50 0xba6c50] 0xc002a0a180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:35:30.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:35:31.041: INFO: rc: 1 Jan 4 11:35:31.041: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001616210 exit status 1 true [0xc000dd8bc0 0xc000dd8c88 0xc000dd8eb8] [0xc000dd8bc0 0xc000dd8c88 0xc000dd8eb8] [0xc000dd8be8 0xc000dd8df8] [0xba6c50 0xba6c50] 0xc001da5920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:35:41.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:35:41.249: INFO: rc: 1 Jan 4 11:35:41.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070eed0 exit status 1 true [0xc000d260e0 0xc000d26120 0xc000d26160] [0xc000d260e0 0xc000d26120 0xc000d26160] [0xc000d26108 0xc000d26150] [0xba6c50 0xba6c50] 0xc001916a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:35:51.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:35:51.392: INFO: rc: 1 Jan 4 11:35:51.392: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070ef90 exit status 1 true [0xc000d26168 0xc000d26180 0xc000d26198] [0xc000d26168 0xc000d26180 0xc000d26198] [0xc000d26178 0xc000d26190] [0xba6c50 0xba6c50] 0xc001917200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 11:36:01.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6041 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:36:01.550: INFO: rc: 1 Jan 4 11:36:01.550: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 4 11:36:01.550: INFO: Scaling statefulset ss to 0 Jan 4 11:36:01.566: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet 
functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 4 11:36:01.570: INFO: Deleting all statefulset in ns statefulset-6041
Jan 4 11:36:01.575: INFO: Scaling statefulset ss to 0
Jan 4 11:36:01.591: INFO: Waiting for statefulset status.replicas updated to 0
Jan 4 11:36:01.594: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:36:01.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6041" for this suite.
Jan 4 11:36:07.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:36:07.772: INFO: namespace statefulset-6041 deletion completed in 6.148237614s

• [SLOW TEST:379.843 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:36:07.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7154
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7154
STEP: Deleting pre-stop pod
Jan 4 11:36:28.993: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:36:29.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7154" for this suite.
Jan 4 11:37:07.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:37:07.277: INFO: namespace prestop-7154 deletion completed in 38.236524249s • [SLOW TEST:59.505 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:37:07.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 4 11:37:16.631: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:37:16.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5287" for this suite. Jan 4 11:37:22.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:37:22.977: INFO: namespace container-runtime-5287 deletion completed in 6.28882869s • [SLOW TEST:15.699 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:37:22.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 4 11:37:31.213: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 4 11:37:51.385: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:37:51.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6579" for this suite.
Jan 4 11:37:57.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:37:57.708: INFO: namespace pods-6579 deletion completed in 6.311982809s
• [SLOW TEST:34.731 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:37:57.709:
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0104 11:38:28.423376 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 11:38:28.423: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:38:28.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-523" for this suite. 
Jan 4 11:38:35.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:38:35.958: INFO: namespace gc-523 deletion completed in 7.530252739s • [SLOW TEST:38.249 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:38:35.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 11:38:46.589: INFO: Waiting up to 5m0s for pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6" in namespace "pods-5843" to be "success or failure" Jan 4 11:38:46.660: INFO: Pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6": Phase="Pending", Reason="", readiness=false. Elapsed: 70.49958ms Jan 4 11:38:48.669: INFO: Pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.078900879s Jan 4 11:38:50.675: INFO: Pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085480181s Jan 4 11:38:52.681: INFO: Pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091072473s Jan 4 11:38:54.690: INFO: Pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100109552s STEP: Saw pod success Jan 4 11:38:54.690: INFO: Pod "client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6" satisfied condition "success or failure" Jan 4 11:38:54.694: INFO: Trying to get logs from node iruya-node pod client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6 container env3cont: STEP: delete the pod Jan 4 11:38:54.799: INFO: Waiting for pod client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6 to disappear Jan 4 11:38:54.807: INFO: Pod client-envvars-cde87f5a-7e33-4c0f-8c65-e3a6d9ce39c6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:38:54.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5843" for this suite. 
Jan 4 11:39:36.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:39:36.963: INFO: namespace pods-5843 deletion completed in 42.151823667s • [SLOW TEST:61.005 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:39:36.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b98a0ffd-6b7f-4c8b-b6cb-4c8c7774a51d STEP: Creating a pod to test consume configMaps Jan 4 11:39:37.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936" in namespace "configmap-658" to be "success or failure" Jan 4 11:39:37.098: INFO: Pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357038ms Jan 4 11:39:39.104: INFO: Pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013256219s Jan 4 11:39:41.112: INFO: Pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02059316s Jan 4 11:39:43.120: INFO: Pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028832295s Jan 4 11:39:45.127: INFO: Pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035436619s STEP: Saw pod success Jan 4 11:39:45.127: INFO: Pod "pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936" satisfied condition "success or failure" Jan 4 11:39:45.131: INFO: Trying to get logs from node iruya-node pod pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936 container configmap-volume-test: STEP: delete the pod Jan 4 11:39:45.276: INFO: Waiting for pod pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936 to disappear Jan 4 11:39:45.285: INFO: Pod pod-configmaps-185ecb56-4a2d-4919-9858-5c9a688f1936 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:39:45.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-658" for this suite. 
Jan 4 11:39:51.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:39:51.623: INFO: namespace configmap-658 deletion completed in 6.33306816s • [SLOW TEST:14.660 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:39:51.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3041 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 4 11:39:51.697: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 4 11:40:28.102: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3041 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 
11:40:28.102: INFO: >>> kubeConfig: /root/.kube/config I0104 11:40:28.181603 8 log.go:172] (0xc000a976b0) (0xc00218a820) Create stream I0104 11:40:28.181648 8 log.go:172] (0xc000a976b0) (0xc00218a820) Stream added, broadcasting: 1 I0104 11:40:28.191024 8 log.go:172] (0xc000a976b0) Reply frame received for 1 I0104 11:40:28.191071 8 log.go:172] (0xc000a976b0) (0xc00046c000) Create stream I0104 11:40:28.191082 8 log.go:172] (0xc000a976b0) (0xc00046c000) Stream added, broadcasting: 3 I0104 11:40:28.193099 8 log.go:172] (0xc000a976b0) Reply frame received for 3 I0104 11:40:28.193119 8 log.go:172] (0xc000a976b0) (0xc00046c640) Create stream I0104 11:40:28.193124 8 log.go:172] (0xc000a976b0) (0xc00046c640) Stream added, broadcasting: 5 I0104 11:40:28.194914 8 log.go:172] (0xc000a976b0) Reply frame received for 5 I0104 11:40:29.363180 8 log.go:172] (0xc000a976b0) Data frame received for 3 I0104 11:40:29.363261 8 log.go:172] (0xc00046c000) (3) Data frame handling I0104 11:40:29.363312 8 log.go:172] (0xc00046c000) (3) Data frame sent I0104 11:40:29.528691 8 log.go:172] (0xc000a976b0) Data frame received for 1 I0104 11:40:29.529108 8 log.go:172] (0xc000a976b0) (0xc00046c000) Stream removed, broadcasting: 3 I0104 11:40:29.529189 8 log.go:172] (0xc00218a820) (1) Data frame handling I0104 11:40:29.529248 8 log.go:172] (0xc00218a820) (1) Data frame sent I0104 11:40:29.529293 8 log.go:172] (0xc000a976b0) (0xc00218a820) Stream removed, broadcasting: 1 I0104 11:40:29.529337 8 log.go:172] (0xc000a976b0) (0xc00046c640) Stream removed, broadcasting: 5 I0104 11:40:29.529409 8 log.go:172] (0xc000a976b0) Go away received I0104 11:40:29.529590 8 log.go:172] (0xc000a976b0) (0xc00218a820) Stream removed, broadcasting: 1 I0104 11:40:29.529786 8 log.go:172] (0xc000a976b0) (0xc00046c000) Stream removed, broadcasting: 3 I0104 11:40:29.529956 8 log.go:172] (0xc000a976b0) (0xc00046c640) Stream removed, broadcasting: 5 Jan 4 11:40:29.530: INFO: Found all expected endpoints: [netserver-0] Jan 4 
11:40:29.549: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3041 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 11:40:29.549: INFO: >>> kubeConfig: /root/.kube/config I0104 11:40:29.615491 8 log.go:172] (0xc000d5e2c0) (0xc00218aa00) Create stream I0104 11:40:29.615544 8 log.go:172] (0xc000d5e2c0) (0xc00218aa00) Stream added, broadcasting: 1 I0104 11:40:29.623798 8 log.go:172] (0xc000d5e2c0) Reply frame received for 1 I0104 11:40:29.623841 8 log.go:172] (0xc000d5e2c0) (0xc0007ea0a0) Create stream I0104 11:40:29.623852 8 log.go:172] (0xc000d5e2c0) (0xc0007ea0a0) Stream added, broadcasting: 3 I0104 11:40:29.625938 8 log.go:172] (0xc000d5e2c0) Reply frame received for 3 I0104 11:40:29.626007 8 log.go:172] (0xc000d5e2c0) (0xc00143ea00) Create stream I0104 11:40:29.626029 8 log.go:172] (0xc000d5e2c0) (0xc00143ea00) Stream added, broadcasting: 5 I0104 11:40:29.627519 8 log.go:172] (0xc000d5e2c0) Reply frame received for 5 I0104 11:40:30.745809 8 log.go:172] (0xc000d5e2c0) Data frame received for 3 I0104 11:40:30.746087 8 log.go:172] (0xc0007ea0a0) (3) Data frame handling I0104 11:40:30.746254 8 log.go:172] (0xc0007ea0a0) (3) Data frame sent I0104 11:40:30.893169 8 log.go:172] (0xc000d5e2c0) Data frame received for 1 I0104 11:40:30.893251 8 log.go:172] (0xc00218aa00) (1) Data frame handling I0104 11:40:30.893272 8 log.go:172] (0xc00218aa00) (1) Data frame sent I0104 11:40:30.893287 8 log.go:172] (0xc000d5e2c0) (0xc00218aa00) Stream removed, broadcasting: 1 I0104 11:40:30.893388 8 log.go:172] (0xc000d5e2c0) (0xc0007ea0a0) Stream removed, broadcasting: 3 I0104 11:40:30.893545 8 log.go:172] (0xc000d5e2c0) (0xc00143ea00) Stream removed, broadcasting: 5 I0104 11:40:30.893582 8 log.go:172] (0xc000d5e2c0) (0xc00218aa00) Stream removed, broadcasting: 1 I0104 11:40:30.893593 8 log.go:172] (0xc000d5e2c0) 
(0xc0007ea0a0) Stream removed, broadcasting: 3 I0104 11:40:30.893614 8 log.go:172] (0xc000d5e2c0) (0xc00143ea00) Stream removed, broadcasting: 5 I0104 11:40:30.893899 8 log.go:172] (0xc000d5e2c0) Go away received Jan 4 11:40:30.894: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:40:30.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3041" for this suite. Jan 4 11:40:54.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:40:55.071: INFO: namespace pod-network-test-3041 deletion completed in 24.168654405s • [SLOW TEST:63.447 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:40:55.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a912a953-cec6-4910-a992-9ea15580af8a STEP: Creating a pod to test consume secrets Jan 4 11:40:55.229: INFO: Waiting up to 5m0s for pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9" in namespace "secrets-6801" to be "success or failure" Jan 4 11:40:55.234: INFO: Pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.312949ms Jan 4 11:40:57.246: INFO: Pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016670296s Jan 4 11:40:59.252: INFO: Pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022899372s Jan 4 11:41:01.259: INFO: Pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030249822s Jan 4 11:41:03.267: INFO: Pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037733755s STEP: Saw pod success Jan 4 11:41:03.267: INFO: Pod "pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9" satisfied condition "success or failure" Jan 4 11:41:03.272: INFO: Trying to get logs from node iruya-node pod pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9 container secret-volume-test: STEP: delete the pod Jan 4 11:41:03.316: INFO: Waiting for pod pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9 to disappear Jan 4 11:41:03.329: INFO: Pod pod-secrets-969d7874-d880-405c-a46b-b49cb3b993b9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:41:03.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6801" for this suite. 
Jan 4 11:41:09.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:41:09.636: INFO: namespace secrets-6801 deletion completed in 6.301809375s
• [SLOW TEST:14.564 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:41:09.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 4 11:41:19.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-73f28d05-b0f0-4356-9ed6-8a589a67ddec -c busybox-main-container --namespace=emptydir-4916 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 4 11:41:22.310: INFO: stderr: "I0104 11:41:21.846222 711 log.go:172] (0xc000116e70) (0xc00010eb40) Create stream\nI0104 11:41:21.846465 711 log.go:172] (0xc000116e70) (0xc00010eb40) Stream added, broadcasting: 1\nI0104 11:41:21.861070 711 log.go:172] (0xc000116e70) Reply frame
received for 1\nI0104 11:41:21.861268 711 log.go:172] (0xc000116e70) (0xc0006c40a0) Create stream\nI0104 11:41:21.861303 711 log.go:172] (0xc000116e70) (0xc0006c40a0) Stream added, broadcasting: 3\nI0104 11:41:21.864870 711 log.go:172] (0xc000116e70) Reply frame received for 3\nI0104 11:41:21.865116 711 log.go:172] (0xc000116e70) (0xc000850000) Create stream\nI0104 11:41:21.865424 711 log.go:172] (0xc000116e70) (0xc000850000) Stream added, broadcasting: 5\nI0104 11:41:21.870962 711 log.go:172] (0xc000116e70) Reply frame received for 5\nI0104 11:41:22.061809 711 log.go:172] (0xc000116e70) Data frame received for 3\nI0104 11:41:22.061979 711 log.go:172] (0xc0006c40a0) (3) Data frame handling\nI0104 11:41:22.062006 711 log.go:172] (0xc0006c40a0) (3) Data frame sent\nI0104 11:41:22.297585 711 log.go:172] (0xc000116e70) (0xc0006c40a0) Stream removed, broadcasting: 3\nI0104 11:41:22.297704 711 log.go:172] (0xc000116e70) Data frame received for 1\nI0104 11:41:22.297727 711 log.go:172] (0xc00010eb40) (1) Data frame handling\nI0104 11:41:22.297757 711 log.go:172] (0xc00010eb40) (1) Data frame sent\nI0104 11:41:22.297777 711 log.go:172] (0xc000116e70) (0xc00010eb40) Stream removed, broadcasting: 1\nI0104 11:41:22.297818 711 log.go:172] (0xc000116e70) (0xc000850000) Stream removed, broadcasting: 5\nI0104 11:41:22.297857 711 log.go:172] (0xc000116e70) Go away received\nI0104 11:41:22.299212 711 log.go:172] (0xc000116e70) (0xc00010eb40) Stream removed, broadcasting: 1\nI0104 11:41:22.299254 711 log.go:172] (0xc000116e70) (0xc0006c40a0) Stream removed, broadcasting: 3\nI0104 11:41:22.299267 711 log.go:172] (0xc000116e70) (0xc000850000) Stream removed, broadcasting: 5\n" Jan 4 11:41:22.310: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:41:22.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "emptydir-4916" for this suite.
Jan 4 11:41:28.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:41:28.541: INFO: namespace emptydir-4916 deletion completed in 6.221761984s
• [SLOW TEST:18.905 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:41:28.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 11:41:28.660: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 4 11:41:33.675: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 4 11:41:37.692: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 4 11:41:39.701: INFO: Creating deployment "test-rollover-deployment"
Jan 4 11:41:39.718: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 4 11:41:41.734: INFO: Check revision of new replica set for
deployment "test-rollover-deployment" Jan 4 11:41:41.745: INFO: Ensure that both replica sets have 1 created replica Jan 4 11:41:41.753: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 4 11:41:41.765: INFO: Updating deployment test-rollover-deployment Jan 4 11:41:41.765: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 4 11:41:43.807: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 4 11:41:43.817: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 4 11:41:43.828: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:43.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:45.862: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:45.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:47.841: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:47.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:49.841: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:49.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:51.838: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:51.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734910, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:53.851: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:53.851: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734910, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:55.845: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:55.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734910, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:57.851: INFO: all replica 
sets need to contain the pod-template-hash label Jan 4 11:41:57.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734910, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:41:59.839: INFO: all replica sets need to contain the pod-template-hash label Jan 4 11:41:59.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734910, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713734899, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 11:42:01.842: INFO: Jan 4 11:42:01.842: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 4 11:42:01.849: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5423,SelfLink:/apis/apps/v1/namespaces/deployment-5423/deployments/test-rollover-deployment,UID:577efcc3-07a8-42ce-9f36-0592c9f15e6c,ResourceVersion:19251934,Generation:2,CreationTimestamp:2020-01-04 11:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 11:41:39 +0000 UTC 2020-01-04 11:41:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 11:42:00 +0000 UTC 2020-01-04 11:41:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 4 11:42:01.855: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5423,SelfLink:/apis/apps/v1/namespaces/deployment-5423/replicasets/test-rollover-deployment-854595fc44,UID:2cee376a-cd26-4c2d-b863-3689bbb055f0,ResourceVersion:19251925,Generation:2,CreationTimestamp:2020-01-04 11:41:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 577efcc3-07a8-42ce-9f36-0592c9f15e6c 0xc000bb0e67 0xc000bb0e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 4 11:42:01.855: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 4 11:42:01.855: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5423,SelfLink:/apis/apps/v1/namespaces/deployment-5423/replicasets/test-rollover-controller,UID:bc1f24c4-2d75-4468-aa48-bf0094dff830,ResourceVersion:19251933,Generation:2,CreationTimestamp:2020-01-04 11:41:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 577efcc3-07a8-42ce-9f36-0592c9f15e6c 0xc000bb0cd7 0xc000bb0cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 11:42:01.856: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5423,SelfLink:/apis/apps/v1/namespaces/deployment-5423/replicasets/test-rollover-deployment-9b8b997cf,UID:f1b4169c-da0b-476c-a33b-ec7f081a89c1,ResourceVersion:19251884,Generation:2,CreationTimestamp:2020-01-04 11:41:39 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 577efcc3-07a8-42ce-9f36-0592c9f15e6c 0xc000bb0fa0 0xc000bb0fa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 11:42:01.862: INFO: Pod "test-rollover-deployment-854595fc44-nssdl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-nssdl,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5423,SelfLink:/api/v1/namespaces/deployment-5423/pods/test-rollover-deployment-854595fc44-nssdl,UID:ae6500df-937d-4a5e-b6c8-41079c1a2ce9,ResourceVersion:19251907,Generation:0,CreationTimestamp:2020-01-04 11:41:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 2cee376a-cd26-4c2d-b863-3689bbb055f0 0xc00212eda7 0xc00212eda8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qrs7w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qrs7w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qrs7w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00212ee20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00212ee40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:41:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:41:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:41:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:41:42 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-04 11:41:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 11:41:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a746485230bffae55ee8d93636017b751fe9075525ca57d508f58727be6b42f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:42:01.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5423" for this suite. Jan 4 11:42:07.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:42:07.974: INFO: namespace deployment-5423 deletion completed in 6.107487801s • [SLOW TEST:39.432 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:42:07.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in 
namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[] Jan 4 11:42:08.198: INFO: Get endpoints failed (16.409162ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 4 11:42:09.209: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[] (1.027684729s elapsed) STEP: Creating pod pod1 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[pod1:[100]] Jan 4 11:42:13.400: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.150639776s elapsed, will retry) Jan 4 11:42:18.473: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[pod1:[100]] (9.223710479s elapsed) STEP: Creating pod pod2 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[pod1:[100] pod2:[101]] Jan 4 11:42:22.845: INFO: Unexpected endpoints: found map[2ad90343-55af-4949-9c35-f25da2722da3:[100]], expected map[pod1:[100] pod2:[101]] (4.365179836s elapsed, will retry) Jan 4 11:42:24.902: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[pod1:[100] pod2:[101]] (6.422171028s elapsed) STEP: Deleting pod pod1 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[pod2:[101]] Jan 4 11:42:24.953: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[pod2:[101]] (38.410317ms elapsed) STEP: Deleting pod pod2 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[] Jan 4 11:42:24.974: INFO: successfully validated that service multi-endpoint-test in namespace 
services-920 exposes endpoints map[] (6.045482ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:42:25.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-920" for this suite. Jan 4 11:42:49.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:42:49.250: INFO: namespace services-920 deletion completed in 24.195949436s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:41.276 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:42:49.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jan 4 11:42:49.489: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5696 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 4 11:43:02.764: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0104 11:43:01.215646 746 log.go:172] (0xc000118790) (0xc0001f0320) Create stream\nI0104 11:43:01.216330 746 log.go:172] (0xc000118790) (0xc0001f0320) Stream added, broadcasting: 1\nI0104 11:43:01.238273 746 log.go:172] (0xc000118790) Reply frame received for 1\nI0104 11:43:01.238529 746 log.go:172] (0xc000118790) (0xc000324000) Create stream\nI0104 11:43:01.238609 746 log.go:172] (0xc000118790) (0xc000324000) Stream added, broadcasting: 3\nI0104 11:43:01.246629 746 log.go:172] (0xc000118790) Reply frame received for 3\nI0104 11:43:01.246712 746 log.go:172] (0xc000118790) (0xc0003240a0) Create stream\nI0104 11:43:01.246731 746 log.go:172] (0xc000118790) (0xc0003240a0) Stream added, broadcasting: 5\nI0104 11:43:01.249397 746 log.go:172] (0xc000118790) Reply frame received for 5\nI0104 11:43:01.249477 746 log.go:172] (0xc000118790) (0xc00032a000) Create stream\nI0104 11:43:01.249506 746 log.go:172] (0xc000118790) (0xc00032a000) Stream added, broadcasting: 7\nI0104 11:43:01.251557 746 log.go:172] (0xc000118790) Reply frame received for 7\nI0104 11:43:01.252130 746 log.go:172] (0xc000324000) (3) Writing data frame\nI0104 11:43:01.252555 746 log.go:172] (0xc000324000) (3) Writing data frame\nI0104 11:43:01.272728 746 log.go:172] (0xc000118790) Data frame received for 5\nI0104 11:43:01.272819 746 log.go:172] (0xc0003240a0) (5) Data frame handling\nI0104 11:43:01.272931 746 log.go:172] (0xc0003240a0) (5) Data frame sent\nI0104 11:43:01.278072 746 log.go:172] 
(0xc000118790) Data frame received for 5\nI0104 11:43:01.278089 746 log.go:172] (0xc0003240a0) (5) Data frame handling\nI0104 11:43:01.278104 746 log.go:172] (0xc0003240a0) (5) Data frame sent\nI0104 11:43:02.726661 746 log.go:172] (0xc000118790) (0xc000324000) Stream removed, broadcasting: 3\nI0104 11:43:02.726903 746 log.go:172] (0xc000118790) Data frame received for 1\nI0104 11:43:02.726990 746 log.go:172] (0xc000118790) (0xc0003240a0) Stream removed, broadcasting: 5\nI0104 11:43:02.727040 746 log.go:172] (0xc0001f0320) (1) Data frame handling\nI0104 11:43:02.727070 746 log.go:172] (0xc0001f0320) (1) Data frame sent\nI0104 11:43:02.727155 746 log.go:172] (0xc000118790) (0xc00032a000) Stream removed, broadcasting: 7\nI0104 11:43:02.727245 746 log.go:172] (0xc000118790) (0xc0001f0320) Stream removed, broadcasting: 1\nI0104 11:43:02.727279 746 log.go:172] (0xc000118790) Go away received\nI0104 11:43:02.727396 746 log.go:172] (0xc000118790) (0xc0001f0320) Stream removed, broadcasting: 1\nI0104 11:43:02.727470 746 log.go:172] (0xc000118790) (0xc000324000) Stream removed, broadcasting: 3\nI0104 11:43:02.727484 746 log.go:172] (0xc000118790) (0xc0003240a0) Stream removed, broadcasting: 5\nI0104 11:43:02.727499 746 log.go:172] (0xc000118790) (0xc00032a000) Stream removed, broadcasting: 7\n" Jan 4 11:43:02.764: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:43:04.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5696" for this suite. 
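For readers following along: the `kubectl run --rm --generator=job/v1 --restart=OnFailure` invocation exercised above is roughly equivalent to creating a Job like the one below and attaching to its pod. This is a hedged sketch for orientation only, not the test's actual template; the metadata name and label are taken from the log, everything else is an assumption based on the flags shown.

```yaml
# Approximate Job equivalent of the deprecated
# `kubectl run e2e-test-rm-busybox-job --generator=job/v1 --restart=OnFailure
#  --image=docker.io/library/busybox:1.29 --stdin --attach -- sh -c "cat && echo 'stdin closed'"`
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure          # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                     # from --stdin; `cat` echoes attached stdin
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

With `--rm=true`, kubectl deletes the Job after the attach session ends, which is the `job.batch "e2e-test-rm-busybox-job" deleted` line the test then verifies.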
Jan 4 11:43:10.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:43:10.934: INFO: namespace kubectl-5696 deletion completed in 6.152555777s • [SLOW TEST:21.683 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:43:10.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 4 11:43:11.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7" in namespace "downward-api-8422" to be "success or failure" Jan 4 11:43:11.164: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.593593ms Jan 4 11:43:13.171: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019656798s Jan 4 11:43:15.179: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026771513s Jan 4 11:43:17.186: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033850562s Jan 4 11:43:19.232: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080610441s Jan 4 11:43:21.240: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088570017s Jan 4 11:43:23.254: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.102046236s STEP: Saw pod success Jan 4 11:43:23.254: INFO: Pod "downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7" satisfied condition "success or failure" Jan 4 11:43:23.258: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7 container client-container: STEP: delete the pod Jan 4 11:43:23.357: INFO: Waiting for pod downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7 to disappear Jan 4 11:43:23.368: INFO: Pod downwardapi-volume-38f4a7d8-445e-414e-b79f-35bb256c1cd7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:43:23.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8422" for this suite. 
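The Downward API spec above polls the pod phase roughly every two seconds ("Elapsed: 2.01s … 12.10s") until it reaches Succeeded or the 5-minute deadline ("Waiting up to 5m0s") expires. A sketch of that wait-for-condition pattern in Python — the names are illustrative, not the framework's own:

```python
import time

def wait_for_condition(check, timeout_s=300.0, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses.

    Mirrors the framework's "Waiting up to 5m0s for pod ... to be
    'success or failure'" loop; the 2s poll interval matches the Elapsed
    timestamps logged above. Returns True on success, False on timeout.
    """
    start = clock()
    while True:
        if check():
            return True
        if clock() - start >= timeout_s:
            return False
        sleep(poll_s)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is also how a condition like "pod phase == Succeeded" can be simulated.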
Jan 4 11:43:29.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:43:29.517: INFO: namespace downward-api-8422 deletion completed in 6.144266111s • [SLOW TEST:18.583 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:43:29.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 4 11:43:29.668: INFO: Waiting up to 5m0s for pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf" in namespace "emptydir-3240" to be "success or failure" Jan 4 11:43:29.689: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.484657ms Jan 4 11:43:31.698: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029926251s Jan 4 11:43:33.712: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.044146679s Jan 4 11:43:35.719: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051519826s Jan 4 11:43:37.728: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060000537s Jan 4 11:43:39.737: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069102075s Jan 4 11:43:41.745: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.077349778s STEP: Saw pod success Jan 4 11:43:41.745: INFO: Pod "pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf" satisfied condition "success or failure" Jan 4 11:43:41.750: INFO: Trying to get logs from node iruya-node pod pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf container test-container: STEP: delete the pod Jan 4 11:43:41.984: INFO: Waiting for pod pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf to disappear Jan 4 11:43:41.989: INFO: Pod pod-a8e42349-07c0-417b-b935-8cbb14e6b0cf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:43:41.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3240" for this suite. 
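In the spec name above, "(root,0644,default)" encodes the test matrix: run as root, create the file with mode 0644, on the default (disk-backed) emptyDir medium. The expected permission string can be rendered with the standard library — a sketch of the check, not the e2e assertion itself:

```python
import stat

def mode_string(mode: int) -> str:
    """Render a regular-file mode in ls(1) notation, e.g. 0o644 -> -rw-r--r--.

    The emptydir test writes a file with the requested mode and verifies
    the kubelet preserved it; this is how 0644 would appear in a listing.
    """
    return stat.filemode(stat.S_IFREG | mode)
```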
Jan 4 11:43:48.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:43:48.266: INFO: namespace emptydir-3240 deletion completed in 6.272072673s • [SLOW TEST:18.749 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:43:48.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 4 11:43:48.338: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 4 11:43:48.390: INFO: Waiting for terminating namespaces to be deleted... 
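The SchedulerPredicates spec that follows sums the CPU requests of every pod already on each node (the "requesting resource cpu=..." lines below), then creates filler pods sized to consume the remainder, so that one more pod cannot be scheduled. A sketch of that accounting, using the millicore values logged for this cluster as illustrative inputs:

```python
import re
from collections import defaultdict

def millicores(req: str) -> int:
    """Convert a logged request like 'cpu=250m' to integer millicores."""
    m = re.fullmatch(r"cpu=(\d+)m", req)
    if not m:
        raise ValueError(f"unrecognized cpu request: {req}")
    return int(m.group(1))

def used_per_node(pods):
    """pods: iterable of (node, cpu_request) pairs, as in the log below."""
    totals = defaultdict(int)
    for node, req in pods:
        totals[node] += millicores(req)
    return dict(totals)

# Values taken from the 'requesting resource' lines in this run:
logged = [
    ("iruya-server-sfge57q7djm7", "cpu=100m"),  # coredns-...-bm4gs
    ("iruya-server-sfge57q7djm7", "cpu=100m"),  # coredns-...-xx8w8
    ("iruya-server-sfge57q7djm7", "cpu=0m"),    # etcd
    ("iruya-server-sfge57q7djm7", "cpu=250m"),  # kube-apiserver
    ("iruya-server-sfge57q7djm7", "cpu=200m"),  # kube-controller-manager
    ("iruya-server-sfge57q7djm7", "cpu=0m"),    # kube-proxy-58v95
    ("iruya-server-sfge57q7djm7", "cpu=100m"),  # kube-scheduler
    ("iruya-server-sfge57q7djm7", "cpu=20m"),   # weave-net-bzl4d
    ("iruya-node", "cpu=0m"),                   # kube-proxy-976zl
    ("iruya-node", "cpu=20m"),                  # weave-net-rlp57
]
```

Subtracting these totals from each node's allocatable CPU gives the filler-pod sizes; the subsequent `additional-pod` then fails with "0/2 nodes are available: 2 Insufficient cpu", which is exactly what the spec asserts.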
Jan 4 11:43:48.394: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 4 11:43:48.410: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.410: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 11:43:48.410: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 4 11:43:48.410: INFO: Container weave ready: true, restart count 0 Jan 4 11:43:48.410: INFO: Container weave-npc ready: true, restart count 0 Jan 4 11:43:48.410: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 4 11:43:48.420: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.420: INFO: Container etcd ready: true, restart count 0 Jan 4 11:43:48.420: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 4 11:43:48.420: INFO: Container weave ready: true, restart count 0 Jan 4 11:43:48.420: INFO: Container weave-npc ready: true, restart count 0 Jan 4 11:43:48.420: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.420: INFO: Container coredns ready: true, restart count 0 Jan 4 11:43:48.420: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.420: INFO: Container kube-controller-manager ready: true, restart count 17 Jan 4 11:43:48.420: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.420: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 11:43:48.420: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container 
statuses recorded) Jan 4 11:43:48.420: INFO: Container kube-apiserver ready: true, restart count 0 Jan 4 11:43:48.420: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.420: INFO: Container kube-scheduler ready: true, restart count 12 Jan 4 11:43:48.420: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 4 11:43:48.420: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Jan 4 11:43:48.587: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Jan 4 11:43:48.587: INFO: Pod weave-net-rlp57 requesting resource 
cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-8d505513-ad05-4399-b500-51e89ef62d76.15e6ac5b913b6280], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1320/filler-pod-8d505513-ad05-4399-b500-51e89ef62d76 to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d505513-ad05-4399-b500-51e89ef62d76.15e6ac5ccd495fb3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d505513-ad05-4399-b500-51e89ef62d76.15e6ac5db3eceb80], Reason = [Created], Message = [Created container filler-pod-8d505513-ad05-4399-b500-51e89ef62d76] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d505513-ad05-4399-b500-51e89ef62d76.15e6ac5de6c6d630], Reason = [Started], Message = [Started container filler-pod-8d505513-ad05-4399-b500-51e89ef62d76] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc.15e6ac5b941979ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1320/filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc.15e6ac5cec534442], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc.15e6ac5debc3a5a7], Reason = [Created], Message = [Created container filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc.15e6ac5e0a860082], Reason = [Started], Message = [Started container filler-pod-9c003654-8bda-4d9f-9a46-1c6d2d7803cc] STEP: Considering event: Type = [Warning], 
Name = [additional-pod.15e6ac5e5fa25034], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:44:01.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1320" for this suite. Jan 4 11:44:10.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:44:10.110: INFO: namespace sched-pred-1320 deletion completed in 8.16473466s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:21.844 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:44:10.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned 
in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-da93af6c-1ef6-4384-b94f-1d1f35e128bc STEP: Creating a pod to test consume configMaps Jan 4 11:44:10.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843" in namespace "configmap-1967" to be "success or failure" Jan 4 11:44:10.982: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 220.699739ms Jan 4 11:44:13.213: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451783447s Jan 4 11:44:15.243: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481912262s Jan 4 11:44:17.247: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 6.486133501s Jan 4 11:44:19.255: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493564923s Jan 4 11:44:21.265: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 10.503561541s Jan 4 11:44:23.269: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Pending", Reason="", readiness=false. Elapsed: 12.508258264s Jan 4 11:44:25.291: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.530321059s STEP: Saw pod success Jan 4 11:44:25.292: INFO: Pod "pod-configmaps-288f3e69-e351-4196-a04d-253289d92843" satisfied condition "success or failure" Jan 4 11:44:25.302: INFO: Trying to get logs from node iruya-node pod pod-configmaps-288f3e69-e351-4196-a04d-253289d92843 container configmap-volume-test: STEP: delete the pod Jan 4 11:44:25.400: INFO: Waiting for pod pod-configmaps-288f3e69-e351-4196-a04d-253289d92843 to disappear Jan 4 11:44:25.405: INFO: Pod pod-configmaps-288f3e69-e351-4196-a04d-253289d92843 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:44:25.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1967" for this suite. Jan 4 11:44:31.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:44:31.613: INFO: namespace configmap-1967 deletion completed in 6.205057765s • [SLOW TEST:21.503 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:44:31.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 4 11:44:31.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483" in namespace "projected-4717" to be "success or failure" Jan 4 11:44:31.918: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Pending", Reason="", readiness=false. Elapsed: 92.937628ms Jan 4 11:44:33.932: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106599761s Jan 4 11:44:35.941: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115124057s Jan 4 11:44:37.947: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121521586s Jan 4 11:44:39.954: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128636989s Jan 4 11:44:41.963: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137116355s Jan 4 11:44:43.975: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.149081766s STEP: Saw pod success Jan 4 11:44:43.975: INFO: Pod "downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483" satisfied condition "success or failure" Jan 4 11:44:43.978: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483 container client-container: STEP: delete the pod Jan 4 11:44:44.255: INFO: Waiting for pod downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483 to disappear Jan 4 11:44:44.263: INFO: Pod downwardapi-volume-52212e73-10c4-4807-a74c-2ebeb2183483 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:44:44.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4717" for this suite. Jan 4 11:44:50.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:44:50.497: INFO: namespace projected-4717 deletion completed in 6.218513934s • [SLOW TEST:18.883 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:44:50.498: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0104 11:45:05.945753 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 11:45:05.945: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:45:05.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"gc-4876" for this suite. Jan 4 11:45:24.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:45:24.545: INFO: namespace gc-4876 deletion completed in 17.252094669s • [SLOW TEST:34.048 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:45:24.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 11:45:24.705: INFO: Creating deployment "nginx-deployment" Jan 4 11:45:24.714: INFO: Waiting for observed generation 1 Jan 4 11:45:28.289: INFO: Waiting for all required pods to come up Jan 4 11:45:29.581: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 4 11:46:05.879: INFO: Waiting for deployment "nginx-deployment" to complete Jan 4 11:46:05.889: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 4 
11:46:05.900: INFO: Updating deployment nginx-deployment Jan 4 11:46:05.900: INFO: Waiting for observed generation 2 Jan 4 11:46:09.502: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 4 11:46:09.745: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 4 11:46:09.784: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 4 11:46:09.817: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 4 11:46:09.817: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 4 11:46:09.833: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 4 11:46:11.405: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 4 11:46:11.405: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 4 11:46:11.424: INFO: Updating deployment nginx-deployment Jan 4 11:46:11.424: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 4 11:46:13.602: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 4 11:46:13.877: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 4 11:46:17.480: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2693,SelfLink:/apis/apps/v1/namespaces/deployment-2693/deployments/nginx-deployment,UID:fa5fbbc2-2c3b-438c-8628-c740cbe62a6d,ResourceVersion:19252880,Generation:3,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:28,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-04 11:46:12 +0000 UTC 2020-01-04 11:46:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-04 11:46:14 +0000 UTC 2020-01-04 11:45:24 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 4 11:46:20.116: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2693,SelfLink:/apis/apps/v1/namespaces/deployment-2693/replicasets/nginx-deployment-55fb7cb77f,UID:e80ea0c9-b97c-438e-a466-02ed86610c15,ResourceVersion:19252870,Generation:3,CreationTimestamp:2020-01-04 11:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fa5fbbc2-2c3b-438c-8628-c740cbe62a6d 0xc002e24187 0xc002e24188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 11:46:20.117: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 4 11:46:20.117: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2693,SelfLink:/apis/apps/v1/namespaces/deployment-2693/replicasets/nginx-deployment-7b8c6f4498,UID:00d633f9-1e3f-43db-917e-356083f4c192,ResourceVersion:19252878,Generation:3,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fa5fbbc2-2c3b-438c-8628-c740cbe62a6d 0xc002e24257 0xc002e24258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 4 11:46:22.818: INFO: Pod "nginx-deployment-55fb7cb77f-2fj5q" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2fj5q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-2fj5q,UID:a7f6f22d-0e88-4e7e-80a5-7caefa595b07,ResourceVersion:19252866,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc002825c17 0xc002825c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002825c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002825ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.818: INFO: Pod "nginx-deployment-55fb7cb77f-792dr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-792dr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-792dr,UID:adf5fd64-0e19-4a3a-b4cc-94340f705152,ResourceVersion:19252806,Generation:0,CreationTimestamp:2020-01-04 11:46:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc002825d27 
0xc002825d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002825d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002825db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 11:46:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.818: INFO: Pod "nginx-deployment-55fb7cb77f-8sn66" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8sn66,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-8sn66,UID:d9d085d4-aaff-477a-bfda-878cd3ad5f05,ResourceVersion:19252811,Generation:0,CreationTimestamp:2020-01-04 11:46:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc002825e87 0xc002825e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002825f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002825f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 11:46:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.818: INFO: Pod "nginx-deployment-55fb7cb77f-bfhhg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bfhhg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-bfhhg,UID:530c6539-9cea-470c-be25-23d0931369bb,ResourceVersion:19252859,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc002825ff7 0xc002825ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.819: INFO: Pod "nginx-deployment-55fb7cb77f-bkb6s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bkb6s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-bkb6s,UID:60b98f5d-2ab6-4ed7-939c-90626c77be17,ResourceVersion:19252788,Generation:0,CreationTimestamp:2020-01-04 11:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa107 
0xc0026aa108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 11:46:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.819: INFO: Pod "nginx-deployment-55fb7cb77f-bvldz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bvldz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-bvldz,UID:deb2b646-b7fb-4f0e-b6cb-9912944ebd7c,ResourceVersion:19252868,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa277 0xc0026aa278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.819: INFO: Pod "nginx-deployment-55fb7cb77f-djcsc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-djcsc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-djcsc,UID:2dcf5b40-60e9-4126-a21e-6e251afddb76,ResourceVersion:19252861,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa397 0xc0026aa398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa410} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.819: INFO: Pod "nginx-deployment-55fb7cb77f-dqck5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dqck5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-dqck5,UID:c6d1339e-c5d1-40c8-8683-d7a8398b9057,ResourceVersion:19252865,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa4b7 0xc0026aa4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026aa530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.819: INFO: Pod "nginx-deployment-55fb7cb77f-hkmpv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hkmpv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-hkmpv,UID:5175871a-a900-4594-ad21-afa6d2d3ea07,ResourceVersion:19252851,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa5d7 0xc0026aa5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.820: INFO: Pod "nginx-deployment-55fb7cb77f-nrwl4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nrwl4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-nrwl4,UID:155aca5d-8671-4389-b825-703d40577901,ResourceVersion:19252800,Generation:0,CreationTimestamp:2020-01-04 11:46:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa6e7 
0xc0026aa6e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 11:46:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.820: INFO: Pod "nginx-deployment-55fb7cb77f-sxk95" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sxk95,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-sxk95,UID:ec18ff79-d8c0-4fef-99d9-d5746806e382,ResourceVersion:19252789,Generation:0,CreationTimestamp:2020-01-04 11:46:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa857 0xc0026aa858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa8c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 11:46:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.822: INFO: Pod "nginx-deployment-55fb7cb77f-wbb5c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wbb5c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-wbb5c,UID:5fd02f22-6e89-4d0e-828d-19d79a53a96e,ResourceVersion:19252890,Generation:0,CreationTimestamp:2020-01-04 11:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aa9b7 0xc0026aa9b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aaa20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aaa40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 11:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.822: INFO: Pod "nginx-deployment-55fb7cb77f-xfvvz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xfvvz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-55fb7cb77f-xfvvz,UID:1c98c986-718d-4351-a0fb-bf0a7e0eded6,ResourceVersion:19252848,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e80ea0c9-b97c-438e-a466-02ed86610c15 0xc0026aab17 0xc0026aab18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026aab90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aabb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.823: INFO: Pod "nginx-deployment-7b8c6f4498-29mw7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-29mw7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-29mw7,UID:53ff5cc2-a17b-445c-adb8-ae09e419d0a7,ResourceVersion:19252731,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026aac37 0xc0026aac38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aacb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aacd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-04 11:45:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:45:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://83b96f8ff216be4a1ba14f65de73c3eb723d91f18308859de61f293566204080}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.823: INFO: Pod "nginx-deployment-7b8c6f4498-2xdpw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2xdpw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-2xdpw,UID:5e955ba8-152e-4a5d-9aa2-8676cc9bc92e,ResourceVersion:19252862,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026aadb7 0xc0026aadb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aae30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aae50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.823: INFO: Pod "nginx-deployment-7b8c6f4498-49r8d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-49r8d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-49r8d,UID:b1050c51-36cd-46e0-8e3a-67ec778bd752,ResourceVersion:19252864,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026aaed7 0xc0026aaed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aaf50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aaf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.823: INFO: Pod "nginx-deployment-7b8c6f4498-6fv9h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6fv9h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-6fv9h,UID:9c00b8c0-2cb6-4c43-9d0e-ad307206fa5f,ResourceVersion:19252867,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026aaff7 0xc0026aaff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.824: INFO: Pod "nginx-deployment-7b8c6f4498-7ztfc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7ztfc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-7ztfc,UID:063ef600-4019-404a-9599-acb9fd577262,ResourceVersion:19252850,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab117 
0xc0026ab118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.824: INFO: Pod "nginx-deployment-7b8c6f4498-9mpzd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9mpzd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-9mpzd,UID:4f10da8f-b041-4db3-a582-dfa3fcbac834,ResourceVersion:19252849,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab247 0xc0026ab248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab2c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.824: INFO: Pod "nginx-deployment-7b8c6f4498-bt2pz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bt2pz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-bt2pz,UID:7c19e5af-3259-42fc-a22b-4733576cf93f,ResourceVersion:19252737,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab367 0xc0026ab368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-04 11:45:30 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-04 11:46:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3a2970aed011078e3bbf812bcde6fe2e69f18157c729cdbbe1a63394cfc07b68}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.824: INFO: Pod "nginx-deployment-7b8c6f4498-cw962" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cw962,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-cw962,UID:e586f0fc-f78e-483e-8f3e-b2d18162c49d,ResourceVersion:19252852,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab4d7 0xc0026ab4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab540} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.824: INFO: Pod "nginx-deployment-7b8c6f4498-d8fmr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8fmr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-d8fmr,UID:c7978cdd-44bc-4dc2-bb2d-489bc90f765d,ResourceVersion:19252734,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab5e7 
0xc0026ab5e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:24 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-04 11:45:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:46:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://888516fb146f251ffdf52ccb2c4796dc4b6c15269b3795e94e33c77e8425adb8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.825: INFO: Pod "nginx-deployment-7b8c6f4498-dbm9f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dbm9f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-dbm9f,UID:c9898ab3-01d1-4874-bd36-93ba3f06a9ac,ResourceVersion:19252891,Generation:0,CreationTimestamp:2020-01-04 11:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab757 0xc0026ab758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab7f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 11:46:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.825: INFO: Pod "nginx-deployment-7b8c6f4498-f5wlv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f5wlv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-f5wlv,UID:5ad649b7-9195-4bcb-884d-2c9cd5ce0c34,ResourceVersion:19252721,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026ab8d7 0xc0026ab8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab940} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-04 11:45:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:46:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8c6dfced23466ec479cab1092c299ccbb1988cdfedbfd9671cac4fc2c9acd6ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.825: INFO: Pod "nginx-deployment-7b8c6f4498-nh97m" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nh97m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-nh97m,UID:4f0c42c9-2962-45fc-af63-77fd2f6d8441,ResourceVersion:19252741,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026aba37 0xc0026aba38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-04 11:45:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:46:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://81eb0007b7c7b98e32853c6cd28b5f176049be10f99909cbeedfec5e52e12474}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.825: INFO: Pod "nginx-deployment-7b8c6f4498-q7phw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q7phw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-q7phw,UID:3b026aeb-d1ce-4cbb-ba19-1d5488f19eb2,ResourceVersion:19252884,Generation:0,CreationTimestamp:2020-01-04 11:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026abba7 0xc0026abba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abc20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abc40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 11:46:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.826: INFO: Pod "nginx-deployment-7b8c6f4498-qfhmd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qfhmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-qfhmd,UID:c43415fd-7fad-44e5-8af5-0abe0cd2dc70,ResourceVersion:19252717,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026abd07 0xc0026abd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abd70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-04 11:45:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:46:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://402a9dc0759d68e8eb71553a4e815d0966c31ca3ffe80d9e799345be0724c948}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.826: INFO: Pod "nginx-deployment-7b8c6f4498-tgtw7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tgtw7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-tgtw7,UID:50082f20-5aa3-47d3-b0de-59772387c979,ResourceVersion:19252702,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026abe67 0xc0026abe68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-04 11:45:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:45:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://56dd2aded734a32a99e29d994c7351d3b58a6c1b9877a700146dbb24d1405e2e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.826: INFO: Pod "nginx-deployment-7b8c6f4498-x5mmg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x5mmg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-x5mmg,UID:2364a734-275d-4c39-b502-5f136c100e62,ResourceVersion:19252869,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc0026abfc7 0xc0026abfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00212e040} {node.kubernetes.io/unreachable Exists NoExecute 0xc00212e060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.826: INFO: Pod "nginx-deployment-7b8c6f4498-zd88h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zd88h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-zd88h,UID:1432b955-124d-4463-bf04-20bc5787af21,ResourceVersion:19252704,Generation:0,CreationTimestamp:2020-01-04 11:45:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc00212e0e7 
0xc00212e0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00212e150} {node.kubernetes.io/unreachable Exists NoExecute 0xc00212e170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:45:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 
11:45:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-04 11:45:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 11:45:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c3a04d3a6d849b3302f37b65563d91c6fdabfe09412c70c4e065a10f0bec6eaf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.826: INFO: Pod "nginx-deployment-7b8c6f4498-zfdq4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zfdq4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-zfdq4,UID:aa8058af-445c-40fd-8da5-59a51771864d,ResourceVersion:19252847,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc00212e247 0xc00212e248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00212e2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00212e2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.826: INFO: Pod "nginx-deployment-7b8c6f4498-zg9z7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zg9z7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-zg9z7,UID:027aeb46-14f8-41a9-91df-fbbd48ccc837,ResourceVersion:19252863,Generation:0,CreationTimestamp:2020-01-04 11:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc00212e357 
0xc00212e358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00212e3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00212e3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 11:46:22.827: INFO: Pod "nginx-deployment-7b8c6f4498-zwghw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwghw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2693,SelfLink:/api/v1/namespaces/deployment-2693/pods/nginx-deployment-7b8c6f4498-zwghw,UID:5fa9bc33-3633-47f3-a8df-853a03b0b201,ResourceVersion:19252885,Generation:0,CreationTimestamp:2020-01-04 11:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 00d633f9-1e3f-43db-917e-356083f4c192 0xc00212e477 0xc00212e478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5vwk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5vwk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5vwk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00212e4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00212e500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 11:46:13 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 11:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:46:22.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2693" for this suite. 
Jan 4 11:47:52.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:47:53.581: INFO: namespace deployment-2693 deletion completed in 1m29.9295135s • [SLOW TEST:149.035 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:47:53.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 4 11:47:53.853: INFO: Waiting up to 5m0s for pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12" in namespace "emptydir-2213" to be "success or failure" Jan 4 11:47:53.960: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 106.595906ms Jan 4 11:47:56.003: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149615456s Jan 4 11:47:58.009: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.155765656s Jan 4 11:48:00.030: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176943214s Jan 4 11:48:02.038: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184823356s Jan 4 11:48:04.048: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194604091s Jan 4 11:48:06.059: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 12.20522679s Jan 4 11:48:08.068: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 14.214044789s Jan 4 11:48:10.082: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Pending", Reason="", readiness=false. Elapsed: 16.228351923s Jan 4 11:48:12.088: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.234383178s STEP: Saw pod success Jan 4 11:48:12.088: INFO: Pod "pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12" satisfied condition "success or failure" Jan 4 11:48:12.091: INFO: Trying to get logs from node iruya-node pod pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12 container test-container: STEP: delete the pod Jan 4 11:48:12.140: INFO: Waiting for pod pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12 to disappear Jan 4 11:48:12.145: INFO: Pod pod-b8ea171a-1e06-4d80-b6a1-f70a462b2f12 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:48:12.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2213" for this suite. 
Jan 4 11:48:18.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:48:18.332: INFO: namespace emptydir-2213 deletion completed in 6.181341884s • [SLOW TEST:24.751 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:48:18.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 4 11:48:29.254: INFO: Successfully updated pod "labelsupdate707d0cde-9047-4c1e-b70b-6ac7cbbf8fd5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:48:31.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3433" for this suite. 
Jan 4 11:48:53.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:48:53.469: INFO: namespace projected-3433 deletion completed in 22.140190414s • [SLOW TEST:35.137 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:48:53.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 4 11:49:03.965: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 
4 11:49:04.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8431" for this suite. Jan 4 11:49:10.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:49:10.499: INFO: namespace container-runtime-8431 deletion completed in 6.243588112s • [SLOW TEST:17.029 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:49:10.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 4 11:49:10.746: INFO: Waiting up to 5m0s for pod 
"pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3" in namespace "emptydir-1639" to be "success or failure" Jan 4 11:49:10.783: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 37.442996ms Jan 4 11:49:12.792: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04592357s Jan 4 11:49:14.803: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057704103s Jan 4 11:49:16.814: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06827695s Jan 4 11:49:18.824: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078689762s Jan 4 11:49:20.830: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084460914s STEP: Saw pod success Jan 4 11:49:20.830: INFO: Pod "pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3" satisfied condition "success or failure" Jan 4 11:49:20.833: INFO: Trying to get logs from node iruya-node pod pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3 container test-container: STEP: delete the pod Jan 4 11:49:20.901: INFO: Waiting for pod pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3 to disappear Jan 4 11:49:20.911: INFO: Pod pod-fbd12626-8a3a-43eb-9aa0-dbc0a57f5ec3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:49:20.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1639" for this suite. 
Jan 4 11:49:26.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:49:27.025: INFO: namespace emptydir-1639 deletion completed in 6.106542802s • [SLOW TEST:16.525 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:49:27.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-ade213f7-d81a-43aa-9487-90b5db7ee1a6 STEP: Creating configMap with name cm-test-opt-upd-a46f686b-9b80-4e7d-b09e-10bc18bb4f44 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ade213f7-d81a-43aa-9487-90b5db7ee1a6 STEP: Updating configmap cm-test-opt-upd-a46f686b-9b80-4e7d-b09e-10bc18bb4f44 STEP: Creating configMap with name cm-test-opt-create-68a6e3f9-2dbb-40f8-a763-8a2e8b7495ce STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:51:13.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6761" for this suite. Jan 4 11:51:35.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:51:35.515: INFO: namespace configmap-6761 deletion completed in 22.153324283s • [SLOW TEST:128.490 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:51:35.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 11:51:35.745: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 4 11:51:35.789: INFO: Number of nodes with available pods: 0 Jan 4 11:51:35.789: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:37.783: INFO: Number of nodes with available pods: 0 Jan 4 11:51:37.783: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:38.269: INFO: Number of nodes with available pods: 0 Jan 4 11:51:38.269: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:38.809: INFO: Number of nodes with available pods: 0 Jan 4 11:51:38.809: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:39.941: INFO: Number of nodes with available pods: 0 Jan 4 11:51:39.941: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:40.817: INFO: Number of nodes with available pods: 0 Jan 4 11:51:40.817: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:41.807: INFO: Number of nodes with available pods: 0 Jan 4 11:51:41.807: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:43.207: INFO: Number of nodes with available pods: 0 Jan 4 11:51:43.207: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:43.815: INFO: Number of nodes with available pods: 0 Jan 4 11:51:43.815: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:44.971: INFO: Number of nodes with available pods: 0 Jan 4 11:51:44.971: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:45.868: INFO: Number of nodes with available pods: 0 Jan 4 11:51:45.868: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:51:46.804: INFO: Number of nodes with available pods: 1 Jan 4 11:51:46.804: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 4 11:51:47.808: INFO: Number of nodes with available pods: 2 Jan 4 11:51:47.808: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Jan 4 11:51:47.908: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:47.908: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:48.975: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:48.975: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:49.986: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:49.986: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:50.980: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:50.980: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:51.978: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:51.978: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:52.982: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:52.983: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 4 11:51:52.983: INFO: Pod daemon-set-svrdx is not available Jan 4 11:51:53.973: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:53.973: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:53.973: INFO: Pod daemon-set-svrdx is not available Jan 4 11:51:54.978: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:54.978: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:54.978: INFO: Pod daemon-set-svrdx is not available Jan 4 11:51:55.976: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:55.976: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:55.976: INFO: Pod daemon-set-svrdx is not available Jan 4 11:51:56.972: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:56.972: INFO: Wrong image for pod: daemon-set-svrdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:56.972: INFO: Pod daemon-set-svrdx is not available Jan 4 11:51:57.999: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:58.000: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:51:58.977: INFO: Wrong image for pod: daemon-set-lhqvc. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:58.977: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:51:59.998: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:51:59.998: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:52:00.977: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:00.977: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:52:01.991: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:01.991: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:52:02.984: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:02.984: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:52:03.977: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:03.977: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:52:04.976: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:04.976: INFO: Pod daemon-set-z7zh5 is not available Jan 4 11:52:05.972: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:06.982: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:08.015: INFO: Wrong image for pod: daemon-set-lhqvc. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:08.971: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:09.973: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:11.002: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:11.973: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:11.973: INFO: Pod daemon-set-lhqvc is not available Jan 4 11:52:12.979: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:12.979: INFO: Pod daemon-set-lhqvc is not available Jan 4 11:52:13.981: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:13.981: INFO: Pod daemon-set-lhqvc is not available Jan 4 11:52:14.976: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:14.976: INFO: Pod daemon-set-lhqvc is not available Jan 4 11:52:15.980: INFO: Wrong image for pod: daemon-set-lhqvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 4 11:52:15.980: INFO: Pod daemon-set-lhqvc is not available Jan 4 11:52:16.979: INFO: Pod daemon-set-5wz87 is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 4 11:52:16.993: INFO: Number of nodes with available pods: 1 Jan 4 11:52:16.993: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:18.012: INFO: Number of nodes with available pods: 1 Jan 4 11:52:18.012: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:19.009: INFO: Number of nodes with available pods: 1 Jan 4 11:52:19.009: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:20.015: INFO: Number of nodes with available pods: 1 Jan 4 11:52:20.015: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:21.045: INFO: Number of nodes with available pods: 1 Jan 4 11:52:21.045: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:22.007: INFO: Number of nodes with available pods: 1 Jan 4 11:52:22.007: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:23.008: INFO: Number of nodes with available pods: 1 Jan 4 11:52:23.008: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:24.006: INFO: Number of nodes with available pods: 1 Jan 4 11:52:24.006: INFO: Node iruya-node is running more than one daemon pod Jan 4 11:52:25.013: INFO: Number of nodes with available pods: 2 Jan 4 11:52:25.013: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4621, will wait for the garbage collector to delete the pods Jan 4 11:52:25.128: INFO: Deleting DaemonSet.extensions daemon-set took: 19.012781ms Jan 4 11:52:25.429: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.340645ms Jan 4 11:52:37.936: INFO: Number of nodes with available pods: 0 Jan 4 11:52:37.936: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 11:52:37.941: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4621/daemonsets","resourceVersion":"19253787"},"items":null} Jan 4 11:52:37.945: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4621/pods","resourceVersion":"19253787"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 11:52:37.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4621" for this suite. Jan 4 11:52:43.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:52:44.119: INFO: namespace daemonsets-4621 deletion completed in 6.148640037s • [SLOW TEST:68.604 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 11:52:44.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] 
should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 11:52:44.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2" in namespace "projected-2959" to be "success or failure"
Jan 4 11:52:44.200: INFO: Pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83329ms
Jan 4 11:52:46.209: INFO: Pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012494994s
Jan 4 11:52:48.220: INFO: Pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023095893s
Jan 4 11:52:50.225: INFO: Pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028683357s
Jan 4 11:52:52.235: INFO: Pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038411225s
STEP: Saw pod success
Jan 4 11:52:52.235: INFO: Pod "downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2" satisfied condition "success or failure"
Jan 4 11:52:52.238: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2 container client-container: 
STEP: delete the pod
Jan 4 11:52:52.335: INFO: Waiting for pod downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2 to disappear
Jan 4 11:52:52.395: INFO: Pod downwardapi-volume-c4cf037d-833e-4668-a844-c8c7429785f2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:52:52.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2959" for this suite.
Jan 4 11:52:58.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:52:58.614: INFO: namespace projected-2959 deletion completed in 6.208093563s

• [SLOW TEST:14.494 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:52:58.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 11:52:58.679: INFO: Creating ReplicaSet my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c
Jan 4 11:52:58.768: INFO: Pod name my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c: Found 0 pods out of 1
Jan 4 11:53:03.806: INFO: Pod name my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c: Found 1 pods out of 1
Jan 4 11:53:03.806: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c" is running
Jan 4 11:53:07.899: INFO: Pod "my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c-54szn" is running (conditions: [{Type:Initialized Status:True
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 11:52:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 11:52:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 11:52:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 11:52:58 +0000 UTC Reason: Message:}])
Jan 4 11:53:07.899: INFO: Trying to dial the pod
Jan 4 11:53:12.932: INFO: Controller my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c: Got expected result from replica 1 [my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c-54szn]: "my-hostname-basic-43366cda-4c5a-4d7e-b00e-38dd70806d7c-54szn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:53:12.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8228" for this suite.
Jan 4 11:53:18.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:53:19.100: INFO: namespace replicaset-8228 deletion completed in 6.162980756s

• [SLOW TEST:20.486 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:53:19.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-972
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 4 11:53:19.197: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 4 11:54:07.371: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-972 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 11:54:07.371: INFO: >>> kubeConfig: /root/.kube/config
I0104 11:54:07.453897 8 log.go:172] (0xc0002649a0) (0xc001a32140) Create stream
I0104 11:54:07.453967 8 log.go:172] (0xc0002649a0) (0xc001a32140) Stream added, broadcasting: 1
I0104 11:54:07.461008 8 log.go:172] (0xc0002649a0) Reply frame received for 1
I0104 11:54:07.461097 8 log.go:172] (0xc0002649a0) (0xc001b463c0) Create stream
I0104 11:54:07.461113 8 log.go:172] (0xc0002649a0) (0xc001b463c0) Stream added, broadcasting: 3
I0104 11:54:07.465181 8 log.go:172] (0xc0002649a0) Reply frame received for 3
I0104 11:54:07.465232 8 log.go:172] (0xc0002649a0) (0xc0010a6460) Create stream
I0104 11:54:07.465272 8 log.go:172] (0xc0002649a0) (0xc0010a6460) Stream added, broadcasting: 5
I0104 11:54:07.468422 8 log.go:172] (0xc0002649a0) Reply frame received for 5
I0104 11:54:07.857204 8 log.go:172] (0xc0002649a0) Data frame received for 3
I0104 11:54:07.857278 8 log.go:172] (0xc001b463c0) (3) Data frame handling
I0104 11:54:07.857315 8 log.go:172] (0xc001b463c0) (3) Data frame sent
I0104 11:54:08.039825 8 log.go:172] (0xc0002649a0) Data frame received for 1
I0104 11:54:08.039939 8 log.go:172] (0xc0002649a0) (0xc001b463c0) Stream removed, broadcasting: 3
I0104 11:54:08.039985 8 log.go:172] (0xc001a32140) (1) Data frame handling
I0104 11:54:08.040001 8 log.go:172] (0xc001a32140) (1) Data frame sent
I0104 11:54:08.040018 8 log.go:172] (0xc0002649a0) (0xc001a32140) Stream removed, broadcasting: 1
I0104 11:54:08.040045 8 log.go:172] (0xc0002649a0) (0xc0010a6460) Stream removed, broadcasting: 5
I0104 11:54:08.040093 8 log.go:172] (0xc0002649a0) (0xc001a32140) Stream removed, broadcasting: 1
I0104 11:54:08.040124 8 log.go:172] (0xc0002649a0) Go away received
I0104 11:54:08.040167 8 log.go:172] (0xc0002649a0) (0xc001b463c0) Stream removed, broadcasting: 3
I0104 11:54:08.040189 8 log.go:172] (0xc0002649a0) (0xc0010a6460) Stream removed, broadcasting: 5
Jan 4 11:54:08.040: INFO: Waiting for endpoints: map[]
Jan 4 11:54:08.049: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-972 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 11:54:08.049: INFO: >>> kubeConfig: /root/.kube/config
I0104 11:54:08.102216 8 log.go:172] (0xc000ce4210) (0xc0002408c0) Create stream
I0104 11:54:08.102255 8 log.go:172] (0xc000ce4210) (0xc0002408c0) Stream added, broadcasting: 1
I0104 11:54:08.111391 8 log.go:172] (0xc000ce4210) Reply frame received for 1
I0104 11:54:08.111431 8 log.go:172] (0xc000ce4210) (0xc001b46500) Create stream
I0104 11:54:08.111442 8 log.go:172] (0xc000ce4210) (0xc001b46500) Stream added, broadcasting: 3
I0104 11:54:08.112825 8 log.go:172] (0xc000ce4210) Reply frame received for 3
I0104 11:54:08.112847 8 log.go:172] (0xc000ce4210) (0xc000240960) Create stream
I0104 11:54:08.112854 8 log.go:172] (0xc000ce4210) (0xc000240960) Stream added, broadcasting: 5
I0104 11:54:08.114636 8 log.go:172] (0xc000ce4210) Reply frame received for 5
I0104 11:54:08.276949 8 log.go:172] (0xc000ce4210) Data frame received for 3
I0104 11:54:08.277035 8 log.go:172] (0xc001b46500) (3) Data frame handling
I0104 11:54:08.277074 8 log.go:172] (0xc001b46500) (3) Data frame sent
I0104 11:54:08.493263 8 log.go:172] (0xc000ce4210) (0xc001b46500) Stream removed, broadcasting: 3
I0104 11:54:08.493707 8 log.go:172] (0xc000ce4210) Data frame received for 1
I0104 11:54:08.493759 8 log.go:172] (0xc0002408c0) (1) Data frame handling
I0104 11:54:08.493797 8 log.go:172] (0xc0002408c0) (1) Data frame sent
I0104 11:54:08.493821 8 log.go:172] (0xc000ce4210) (0xc0002408c0) Stream removed, broadcasting: 1
I0104 11:54:08.493884 8 log.go:172] (0xc000ce4210) (0xc000240960) Stream removed, broadcasting: 5
I0104 11:54:08.493918 8 log.go:172] (0xc000ce4210) Go away received
I0104 11:54:08.494099 8 log.go:172] (0xc000ce4210) (0xc0002408c0) Stream removed, broadcasting: 1
I0104 11:54:08.494415 8 log.go:172] (0xc000ce4210) (0xc001b46500) Stream removed, broadcasting: 3
I0104 11:54:08.494451 8 log.go:172] (0xc000ce4210) (0xc000240960) Stream removed, broadcasting: 5
Jan 4 11:54:08.494: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:54:08.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-972" for this suite.
Jan 4 11:54:32.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:54:32.746: INFO: namespace pod-network-test-972 deletion completed in 24.223728568s

• [SLOW TEST:73.645 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:54:32.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 11:54:32.875: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc" in namespace "projected-2438" to be "success or failure"
Jan 4 11:54:32.888: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.894312ms
Jan 4 11:54:34.900: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024950058s
Jan 4 11:54:36.911: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035397876s
Jan 4 11:54:38.918: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043066488s
Jan 4 11:54:40.925: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04990607s
Jan 4 11:54:42.936: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060422468s
STEP: Saw pod success
Jan 4 11:54:42.936: INFO: Pod "downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc" satisfied condition "success or failure"
Jan 4 11:54:42.941: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc container client-container: 
STEP: delete the pod
Jan 4 11:54:43.008: INFO: Waiting for pod downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc to disappear
Jan 4 11:54:43.012: INFO: Pod downwardapi-volume-84c97643-07ce-4d13-941e-00454b8f38cc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:54:43.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2438" for this suite.
Jan 4 11:54:49.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:54:49.154: INFO: namespace projected-2438 deletion completed in 6.136029465s

• [SLOW TEST:16.408 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:54:49.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service
account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ws9dm in namespace proxy-9179
I0104 11:54:49.353330 8 runners.go:180] Created replication controller with name: proxy-service-ws9dm, namespace: proxy-9179, replica count: 1
I0104 11:54:50.403833 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:51.404136 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:52.404437 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:53.404659 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:54.404826 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:55.405070 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:56.405634 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:57.406003 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:58.406322 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:54:59.406700 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:00.407016 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:01.407375 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:02.407719 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:03.408079 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:04.408430 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:05.408766 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:06.409421 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:55:07.409970 8 runners.go:180] proxy-service-ws9dm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 4 11:55:07.422: INFO: setup took 18.144085678s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 4 11:55:07.451: INFO: (0) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 28.684015ms) Jan 4 11:55:07.451: INFO: (0) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 28.908259ms) Jan 4 11:55:07.451: INFO: (0) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 29.147681ms) Jan 4 11:55:07.451: INFO: (0) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 29.663701ms) Jan 4 11:55:07.452: INFO: (0) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 29.69884ms) Jan 4 11:55:07.452: INFO: (0) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 30.265556ms) Jan 4 11:55:07.453: INFO: (0) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 31.205885ms) Jan 4 11:55:07.460: INFO: (0) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 37.918689ms) Jan 4 11:55:07.460: INFO: (0) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 38.044323ms) Jan 4 11:55:07.460: INFO: (0) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 38.503503ms) Jan 4 11:55:07.460: INFO: (0) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 38.756584ms) Jan 4 11:55:07.469: INFO: (0) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 46.663994ms) Jan 4 11:55:07.469: INFO: (0) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... (200; 10.797238ms) Jan 4 11:55:07.489: INFO: (1) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 14.551691ms) Jan 4 11:55:07.501: INFO: (1) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 26.938853ms) Jan 4 11:55:07.501: INFO: (1) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... 
(200; 29.596889ms) Jan 4 11:55:07.504: INFO: (1) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 29.744135ms) Jan 4 11:55:07.504: INFO: (1) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 29.688122ms) Jan 4 11:55:07.505: INFO: (1) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 30.951513ms) Jan 4 11:55:07.521: INFO: (1) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 46.490468ms) Jan 4 11:55:07.521: INFO: (1) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 46.438968ms) Jan 4 11:55:07.521: INFO: (1) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 46.509038ms) Jan 4 11:55:07.522: INFO: (1) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 47.729751ms) Jan 4 11:55:07.530: INFO: (2) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 7.519977ms) Jan 4 11:55:07.530: INFO: (2) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 7.547374ms) Jan 4 11:55:07.532: INFO: (2) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 9.000678ms) Jan 4 11:55:07.533: INFO: (2) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 9.885154ms) Jan 4 11:55:07.533: INFO: (2) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 10.224467ms) Jan 4 11:55:07.540: INFO: (2) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... (200; 17.198845ms) Jan 4 11:55:07.540: INFO: (2) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 17.06756ms) Jan 4 11:55:07.541: INFO: (2) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... 
(200; 18.076504ms) Jan 4 11:55:07.541: INFO: (2) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 18.30481ms) Jan 4 11:55:07.541: INFO: (2) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 18.063424ms) Jan 4 11:55:07.541: INFO: (2) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 18.23384ms) Jan 4 11:55:07.541: INFO: (2) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 18.344295ms) Jan 4 11:55:07.541: INFO: (2) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... (200; 15.61555ms) Jan 4 11:55:07.561: INFO: (3) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 15.803131ms) Jan 4 11:55:07.561: INFO: (3) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 15.836963ms) Jan 4 11:55:07.561: INFO: (3) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 16.057229ms) Jan 4 11:55:07.562: INFO: (3) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 16.858639ms) Jan 4 11:55:07.562: INFO: (3) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 17.024861ms) Jan 4 11:55:07.562: INFO: (3) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 17.189111ms) Jan 4 11:55:07.562: INFO: (3) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 17.179968ms) Jan 4 11:55:07.562: INFO: (3) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 17.304063ms) Jan 4 11:55:07.563: INFO: (3) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 17.916127ms) Jan 4 11:55:07.563: INFO: (3) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... 
(200; 17.95578ms) Jan 4 11:55:07.564: INFO: (3) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 18.772924ms) Jan 4 11:55:07.575: INFO: (4) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 10.265131ms) Jan 4 11:55:07.576: INFO: (4) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... (200; 11.811173ms) Jan 4 11:55:07.576: INFO: (4) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 11.41807ms) Jan 4 11:55:07.576: INFO: (4) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 12.336516ms) Jan 4 11:55:07.577: INFO: (4) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 13.10209ms) Jan 4 11:55:07.577: INFO: (4) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 13.257071ms) Jan 4 11:55:07.577: INFO: (4) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... 
(200; 13.313872ms) Jan 4 11:55:07.577: INFO: (4) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 13.007126ms) Jan 4 11:55:07.578: INFO: (4) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 13.348864ms) Jan 4 11:55:07.578: INFO: (4) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 14.417771ms) Jan 4 11:55:07.578: INFO: (4) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 14.214251ms) Jan 4 11:55:07.579: INFO: (4) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 14.909375ms) Jan 4 11:55:07.581: INFO: (4) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 16.335603ms) Jan 4 11:55:07.581: INFO: (4) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 16.850219ms) Jan 4 11:55:07.596: INFO: (5) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 14.824829ms) Jan 4 11:55:07.596: INFO: (5) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 14.74443ms) Jan 4 11:55:07.596: INFO: (5) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 14.751162ms) Jan 4 11:55:07.596: INFO: (5) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 14.657399ms) Jan 4 11:55:07.596: INFO: (5) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... (200; 14.840699ms) Jan 4 11:55:07.596: INFO: (5) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 14.819519ms) Jan 4 11:55:07.597: INFO: (5) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 16.553112ms) Jan 4 11:55:07.599: INFO: (5) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... 
(200; 17.965491ms) Jan 4 11:55:07.599: INFO: (5) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... (200; 9.194785ms) Jan 4 11:55:07.613: INFO: (6) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 9.724664ms) Jan 4 11:55:07.613: INFO: (6) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 9.736447ms) Jan 4 11:55:07.613: INFO: (6) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 10.263586ms) Jan 4 11:55:07.614: INFO: (6) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 10.597439ms) Jan 4 11:55:07.614: INFO: (6) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 11.305779ms) Jan 4 11:55:07.615: INFO: (6) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test (200; 12.557575ms) Jan 4 11:55:07.616: INFO: (6) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 12.501269ms) Jan 4 11:55:07.616: INFO: (6) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 13.038182ms) Jan 4 11:55:07.616: INFO: (6) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 12.990635ms) Jan 4 11:55:07.616: INFO: (6) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 13.163172ms) Jan 4 11:55:07.624: INFO: (7) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 7.175875ms) Jan 4 11:55:07.624: INFO: (7) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 8.136908ms) Jan 4 11:55:07.624: INFO: (7) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 8.025933ms) Jan 4 11:55:07.625: INFO: (7) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 8.090225ms) 
Jan 4 11:55:07.625: INFO: (7) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 8.041837ms) Jan 4 11:55:07.625: INFO: (7) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 8.231335ms) Jan 4 11:55:07.625: INFO: (7) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... (200; 8.608466ms) Jan 4 11:55:07.645: INFO: (7) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... (200; 28.664522ms) Jan 4 11:55:07.650: INFO: (7) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 33.407575ms) Jan 4 11:55:07.650: INFO: (7) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 33.977416ms) Jan 4 11:55:07.651: INFO: (7) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 34.104442ms) Jan 4 11:55:07.651: INFO: (7) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 34.523889ms) Jan 4 11:55:07.651: INFO: (7) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 34.590813ms) Jan 4 11:55:07.651: INFO: (7) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 34.726929ms) Jan 4 11:55:07.651: INFO: (7) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 34.7433ms) Jan 4 11:55:07.657: INFO: (8) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 5.723795ms) Jan 4 11:55:07.658: INFO: (8) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... (200; 8.205057ms) Jan 4 11:55:07.660: INFO: (8) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... 
(200; 8.255269ms) Jan 4 11:55:07.661: INFO: (8) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 9.305501ms) Jan 4 11:55:07.661: INFO: (8) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 9.366057ms) Jan 4 11:55:07.661: INFO: (8) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 9.454219ms) Jan 4 11:55:07.661: INFO: (8) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 9.381389ms) Jan 4 11:55:07.664: INFO: (8) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 12.314356ms) Jan 4 11:55:07.664: INFO: (8) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 12.352968ms) Jan 4 11:55:07.664: INFO: (8) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 12.487857ms) Jan 4 11:55:07.664: INFO: (8) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 12.344081ms) Jan 4 11:55:07.664: INFO: (8) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 12.745995ms) Jan 4 11:55:07.664: INFO: (8) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 12.817267ms) Jan 4 11:55:07.673: INFO: (9) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... (200; 10.560675ms) Jan 4 11:55:07.675: INFO: (9) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 10.74579ms) Jan 4 11:55:07.675: INFO: (9) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 10.581415ms) Jan 4 11:55:07.675: INFO: (9) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 10.69934ms) Jan 4 11:55:07.675: INFO: (9) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 10.9095ms) Jan 4 11:55:07.676: INFO: (9) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 11.411746ms) Jan 4 11:55:07.676: INFO: (9) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 11.415007ms) Jan 4 11:55:07.676: INFO: (9) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 11.42896ms) Jan 4 11:55:07.676: INFO: (9) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 11.81021ms) Jan 4 11:55:07.677: INFO: (9) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 12.691819ms) Jan 4 11:55:07.677: INFO: (9) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 12.934053ms) Jan 4 11:55:07.677: INFO: (9) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 13.290998ms) Jan 4 11:55:07.679: INFO: (9) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 14.975298ms) Jan 4 11:55:07.687: INFO: (10) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 7.516681ms) Jan 4 11:55:07.687: INFO: (10) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 8.103227ms) Jan 4 11:55:07.688: INFO: (10) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 8.582109ms) Jan 4 11:55:07.688: INFO: (10) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 8.738498ms) Jan 4 11:55:07.690: INFO: (10) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 10.221742ms) Jan 4 11:55:07.690: INFO: (10) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 10.543891ms) Jan 4 11:55:07.690: INFO: (10) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... (200; 9.574237ms) Jan 4 11:55:07.705: INFO: (11) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 9.734625ms) Jan 4 11:55:07.705: INFO: (11) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 9.737432ms) Jan 4 11:55:07.705: INFO: (11) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 9.897667ms) Jan 4 11:55:07.706: INFO: (11) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 9.979161ms) Jan 4 11:55:07.706: INFO: (11) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... 
(200; 11.648552ms) Jan 4 11:55:07.708: INFO: (11) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 12.004822ms) Jan 4 11:55:07.708: INFO: (11) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 12.292066ms) Jan 4 11:55:07.708: INFO: (11) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 12.39504ms) Jan 4 11:55:07.708: INFO: (11) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 12.488036ms) Jan 4 11:55:07.709: INFO: (11) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 13.90198ms) Jan 4 11:55:07.710: INFO: (11) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 14.542944ms) Jan 4 11:55:07.710: INFO: (11) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 14.57484ms) Jan 4 11:55:07.719: INFO: (12) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 8.457752ms) Jan 4 11:55:07.719: INFO: (12) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 8.745211ms) Jan 4 11:55:07.720: INFO: (12) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test (200; 9.507242ms) Jan 4 11:55:07.721: INFO: (12) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 10.274133ms) Jan 4 11:55:07.721: INFO: (12) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 9.896477ms) Jan 4 11:55:07.721: INFO: (12) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 10.272262ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 11.324708ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 11.891682ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 12.332677ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 12.388578ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 12.610067ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 12.920506ms) Jan 4 11:55:07.723: INFO: (12) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 13.240501ms) Jan 4 11:55:07.731: INFO: (13) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 7.069273ms) Jan 4 11:55:07.731: INFO: (13) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 7.58246ms) Jan 4 11:55:07.732: INFO: (13) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 8.208386ms) Jan 4 11:55:07.734: INFO: (13) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 10.052284ms) Jan 4 11:55:07.734: INFO: (13) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 10.331104ms) Jan 4 11:55:07.735: INFO: (13) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 11.002125ms) Jan 4 11:55:07.735: INFO: (13) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test (200; 4.222701ms) Jan 4 11:55:07.749: INFO: (14) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 8.908577ms) Jan 4 11:55:07.749: INFO: (14) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 9.088668ms) Jan 4 11:55:07.750: INFO: (14) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 10.014267ms) Jan 4 11:55:07.750: INFO: (14) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 10.139659ms) Jan 4 11:55:07.750: INFO: (14) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 10.641868ms) Jan 4 11:55:07.751: INFO: (14) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 10.874584ms) Jan 4 11:55:07.751: INFO: (14) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 11.620221ms) Jan 4 11:55:07.752: INFO: (14) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 11.679598ms) Jan 4 11:55:07.752: INFO: (14) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 11.853447ms) Jan 4 11:55:07.752: INFO: (14) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... 
(200; 13.144936ms) Jan 4 11:55:07.753: INFO: (14) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 13.259193ms) Jan 4 11:55:07.753: INFO: (14) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 13.234826ms) Jan 4 11:55:07.753: INFO: (14) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 13.331787ms) Jan 4 11:55:07.761: INFO: (15) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 6.471196ms) Jan 4 11:55:07.761: INFO: (15) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 6.948583ms) Jan 4 11:55:07.761: INFO: (15) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 8.112549ms) Jan 4 11:55:07.762: INFO: (15) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 7.483483ms) Jan 4 11:55:07.762: INFO: (15) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 8.213648ms) Jan 4 11:55:07.762: INFO: (15) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 7.86178ms) Jan 4 11:55:07.762: INFO: (15) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 7.92661ms) Jan 4 11:55:07.762: INFO: (15) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... 
(200; 8.331179ms) Jan 4 11:55:07.762: INFO: (15) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 8.456166ms) Jan 4 11:55:07.764: INFO: (15) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 10.67597ms) Jan 4 11:55:07.765: INFO: (15) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 11.877571ms) Jan 4 11:55:07.766: INFO: (15) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 12.050313ms) Jan 4 11:55:07.766: INFO: (15) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 11.519677ms) Jan 4 11:55:07.766: INFO: (15) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 11.528423ms) Jan 4 11:55:07.766: INFO: (15) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 12.317904ms) Jan 4 11:55:07.772: INFO: (16) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 5.713243ms) Jan 4 11:55:07.772: INFO: (16) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 6.383053ms) Jan 4 11:55:07.773: INFO: (16) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... (200; 11.453708ms) Jan 4 11:55:07.778: INFO: (16) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 12.064591ms) Jan 4 11:55:07.780: INFO: (16) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 13.935298ms) Jan 4 11:55:07.783: INFO: (16) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 17.096706ms) Jan 4 11:55:07.784: INFO: (16) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 18.106269ms) Jan 4 11:55:07.785: INFO: (16) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 18.674805ms) Jan 4 11:55:07.785: INFO: (16) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 18.863121ms) Jan 4 11:55:07.786: INFO: (16) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 19.874219ms) Jan 4 11:55:07.787: INFO: (16) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 20.446547ms) Jan 4 11:55:07.788: INFO: (16) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 21.957798ms) Jan 4 11:55:07.788: INFO: (16) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 22.2207ms) Jan 4 11:55:07.789: INFO: (16) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 22.62845ms) Jan 4 11:55:07.789: INFO: (16) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 23.045995ms) Jan 4 11:55:07.805: INFO: (17) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 15.325935ms) Jan 4 11:55:07.805: INFO: (17) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 15.569336ms) Jan 4 11:55:07.805: INFO: (17) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... 
(200; 16.063924ms) Jan 4 11:55:07.806: INFO: (17) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 16.334761ms) Jan 4 11:55:07.806: INFO: (17) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: test<... (200; 17.448612ms) Jan 4 11:55:07.812: INFO: (17) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 23.080203ms) Jan 4 11:55:07.816: INFO: (17) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 27.032307ms) Jan 4 11:55:07.817: INFO: (17) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 27.232457ms) Jan 4 11:55:07.817: INFO: (17) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 27.453383ms) Jan 4 11:55:07.817: INFO: (17) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 27.872396ms) Jan 4 11:55:07.817: INFO: (17) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 27.920574ms) Jan 4 11:55:07.818: INFO: (17) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname2/proxy/: tls qux (200; 28.420229ms) Jan 4 11:55:07.835: INFO: (18) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 16.76899ms) Jan 4 11:55:07.837: INFO: (18) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:460/proxy/: tls baz (200; 18.841578ms) Jan 4 11:55:07.838: INFO: (18) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 20.496675ms) Jan 4 11:55:07.838: INFO: (18) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 20.517546ms) Jan 4 11:55:07.838: INFO: (18) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 20.520193ms) Jan 4 11:55:07.838: INFO: (18) 
/api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:1080/proxy/: ... (200; 20.545275ms) Jan 4 11:55:07.838: INFO: (18) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... (200; 20.607668ms) Jan 4 11:55:07.838: INFO: (18) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 20.696542ms) Jan 4 11:55:07.839: INFO: (18) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/: ... (200; 10.195438ms) Jan 4 11:55:07.854: INFO: (19) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname1/proxy/: foo (200; 12.798382ms) Jan 4 11:55:07.854: INFO: (19) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname1/proxy/: foo (200; 13.238959ms) Jan 4 11:55:07.854: INFO: (19) /api/v1/namespaces/proxy-9179/services/http:proxy-service-ws9dm:portname2/proxy/: bar (200; 13.207061ms) Jan 4 11:55:07.856: INFO: (19) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:1080/proxy/: test<... 
(200; 15.246265ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 15.847947ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/services/proxy-service-ws9dm:portname2/proxy/: bar (200; 15.882706ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/services/https:proxy-service-ws9dm:tlsportname1/proxy/: tls baz (200; 15.894763ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:162/proxy/: bar (200; 16.035086ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/pods/proxy-service-ws9dm-ht58q/proxy/: test (200; 15.912331ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/pods/http:proxy-service-ws9dm-ht58q:160/proxy/: foo (200; 16.207136ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:462/proxy/: tls qux (200; 16.088103ms)
Jan 4 11:55:07.857: INFO: (19) /api/v1/namespaces/proxy-9179/pods/https:proxy-service-ws9dm-ht58q:443/proxy/:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:56:19.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3570" for this suite.
Jan 4 11:56:41.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:56:41.925: INFO: namespace container-probe-3570 deletion completed in 22.14965559s
• [SLOW TEST:82.230 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:56:41.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0104 11:56:42.735611 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 4 11:56:42.735: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:56:42.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5326" for this suite.
Jan 4 11:56:48.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:56:48.922: INFO: namespace gc-5326 deletion completed in 6.181714347s
• [SLOW TEST:6.996 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:56:48.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 4 11:56:58.522: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:56:58.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8972" for this suite.
Jan 4 11:57:04.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:57:04.970: INFO: namespace container-runtime-8972 deletion completed in 6.379498118s
• [SLOW TEST:16.047 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:57:04.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name
projected-configmap-test-volume-5cbbffcf-ff96-46fe-8aae-59be0c65ece3
STEP: Creating a pod to test consume configMaps
Jan 4 11:57:05.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79" in namespace "projected-8550" to be "success or failure"
Jan 4 11:57:05.256: INFO: Pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79": Phase="Pending", Reason="", readiness=false. Elapsed: 96.516592ms
Jan 4 11:57:07.262: INFO: Pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102734172s
Jan 4 11:57:09.877: INFO: Pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717476883s
Jan 4 11:57:11.900: INFO: Pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.740620943s
Jan 4 11:57:13.917: INFO: Pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.757350308s
STEP: Saw pod success
Jan 4 11:57:13.917: INFO: Pod "pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79" satisfied condition "success or failure"
Jan 4 11:57:13.920: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79 container projected-configmap-volume-test:
STEP: delete the pod
Jan 4 11:57:14.022: INFO: Waiting for pod pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79 to disappear
Jan 4 11:57:14.030: INFO: Pod pod-projected-configmaps-89d6785b-233a-4d92-bbb7-e6e166b46f79 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:57:14.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8550" for this suite.
Jan 4 11:57:20.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:57:20.257: INFO: namespace projected-8550 deletion completed in 6.221650083s
• [SLOW TEST:15.287 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:57:20.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a6419829-b50b-401d-a592-ad68c3019575
STEP: Creating a pod to test consume secrets
Jan 4 11:57:20.478: INFO: Waiting up to 5m0s for pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79" in namespace "secrets-7570" to be "success or failure"
Jan 4 11:57:20.570: INFO: Pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79": Phase="Pending", Reason="", readiness=false.
Elapsed: 92.595442ms
Jan 4 11:57:22.584: INFO: Pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10563485s
Jan 4 11:57:24.602: INFO: Pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123994904s
Jan 4 11:57:26.617: INFO: Pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139033861s
Jan 4 11:57:28.636: INFO: Pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157978558s
STEP: Saw pod success
Jan 4 11:57:28.636: INFO: Pod "pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79" satisfied condition "success or failure"
Jan 4 11:57:28.640: INFO: Trying to get logs from node iruya-node pod pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79 container secret-volume-test:
STEP: delete the pod
Jan 4 11:57:28.788: INFO: Waiting for pod pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79 to disappear
Jan 4 11:57:28.908: INFO: Pod pod-secrets-4cd989f9-4a1f-4f5a-aa06-9a79aa23fb79 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 11:57:28.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7570" for this suite.
Jan 4 11:57:34.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:57:35.056: INFO: namespace secrets-7570 deletion completed in 6.140490367s
STEP: Destroying namespace "secret-namespace-2469" for this suite.
Jan 4 11:57:41.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:57:41.509: INFO: namespace secret-namespace-2469 deletion completed in 6.453426997s
• [SLOW TEST:21.252 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 11:57:41.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1173
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 4 11:57:41.739: INFO: Found 0 stateful pods, waiting for 3
Jan 4 11:57:51.751:
INFO: Found 1 stateful pods, waiting for 3 Jan 4 11:58:01.756: INFO: Found 2 stateful pods, waiting for 3 Jan 4 11:58:11.757: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:58:11.757: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:58:11.757: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 4 11:58:11.809: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 4 11:58:21.851: INFO: Updating stateful set ss2 Jan 4 11:58:21.905: INFO: Waiting for Pod statefulset-1173/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:58:31.926: INFO: Waiting for Pod statefulset-1173/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 4 11:58:42.275: INFO: Found 2 stateful pods, waiting for 3 Jan 4 11:58:52.283: INFO: Found 2 stateful pods, waiting for 3 Jan 4 11:59:02.290: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:59:02.290: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:59:02.290: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 4 11:59:12.282: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:59:12.282: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:59:12.282: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 4 11:59:12.318: INFO: 
Updating stateful set ss2 Jan 4 11:59:12.467: INFO: Waiting for Pod statefulset-1173/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:59:22.483: INFO: Waiting for Pod statefulset-1173/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:59:33.084: INFO: Updating stateful set ss2 Jan 4 11:59:33.290: INFO: Waiting for StatefulSet statefulset-1173/ss2 to complete update Jan 4 11:59:33.290: INFO: Waiting for Pod statefulset-1173/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:59:43.310: INFO: Waiting for StatefulSet statefulset-1173/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 4 11:59:53.307: INFO: Deleting all statefulset in ns statefulset-1173 Jan 4 11:59:53.316: INFO: Scaling statefulset ss2 to 0 Jan 4 12:00:23.406: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 12:00:23.410: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:00:23.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1173" for this suite. 
Jan 4 12:00:31.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:00:31.713: INFO: namespace statefulset-1173 deletion completed in 8.268508587s
• [SLOW TEST:170.204 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:00:31.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 4 12:00:31.857: INFO: Waiting up to 5m0s for pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130" in namespace "emptydir-1309" to be "success or failure"
Jan 4 12:00:31.902: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Pending", Reason="", readiness=false. Elapsed: 44.411686ms
Jan 4 12:00:33.918: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06029176s
Jan 4 12:00:35.965: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107365099s
Jan 4 12:00:37.971: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113900707s
Jan 4 12:00:39.978: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120896013s
Jan 4 12:00:41.985: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127479745s
Jan 4 12:00:43.995: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.137755934s
STEP: Saw pod success
Jan 4 12:00:43.995: INFO: Pod "pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130" satisfied condition "success or failure"
Jan 4 12:00:43.999: INFO: Trying to get logs from node iruya-node pod pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130 container test-container:
STEP: delete the pod
Jan 4 12:00:44.231: INFO: Waiting for pod pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130 to disappear
Jan 4 12:00:44.245: INFO: Pod pod-3f93d9ed-4693-49d4-b6ed-0270c6eae130 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:00:44.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1309" for this suite.
Jan 4 12:00:50.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:00:50.440: INFO: namespace emptydir-1309 deletion completed in 6.187878037s
• [SLOW TEST:18.726 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:00:50.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 4 12:00:50.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1919,SelfLink:/api/v1/namespaces/watch-1919/configmaps/e2e-watch-test-watch-closed,UID:b95e00f1-0525-4cf9-aee3-54d8fdea0986,ResourceVersion:19255090,Generation:0,CreationTimestamp:2020-01-04 12:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 4 12:00:50.554: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1919,SelfLink:/api/v1/namespaces/watch-1919/configmaps/e2e-watch-test-watch-closed,UID:b95e00f1-0525-4cf9-aee3-54d8fdea0986,ResourceVersion:19255091,Generation:0,CreationTimestamp:2020-01-04 12:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 4 12:00:50.569: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1919,SelfLink:/api/v1/namespaces/watch-1919/configmaps/e2e-watch-test-watch-closed,UID:b95e00f1-0525-4cf9-aee3-54d8fdea0986,ResourceVersion:19255092,Generation:0,CreationTimestamp:2020-01-04 12:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 4 12:00:50.569: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1919,SelfLink:/api/v1/namespaces/watch-1919/configmaps/e2e-watch-test-watch-closed,UID:b95e00f1-0525-4cf9-aee3-54d8fdea0986,ResourceVersion:19255093,Generation:0,CreationTimestamp:2020-01-04 12:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:00:50.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1919" for this suite.
Jan 4 12:00:56.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:00:56.719: INFO: namespace watch-1919 deletion completed in 6.141736085s
• [SLOW TEST:6.278 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:00:56.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f848ca2c-442a-4200-85a0-d14d391d1693
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:01:08.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9880" for this suite.
Jan 4 12:01:30.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:01:31.120: INFO: namespace configmap-9880 deletion completed in 22.145202327s
• [SLOW TEST:34.401 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:01:31.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 4 12:01:31.218: INFO: Waiting up to 5m0s for pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8" in namespace "emptydir-8496" to be "success or failure"
Jan 4 12:01:31.241: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.864782ms
Jan 4 12:01:33.253: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034799882s
Jan 4 12:01:35.266: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048314596s
Jan 4 12:01:37.276: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058315542s
Jan 4 12:01:39.284: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066335096s
Jan 4 12:01:41.291: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073302116s
STEP: Saw pod success
Jan 4 12:01:41.291: INFO: Pod "pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8" satisfied condition "success or failure"
Jan 4 12:01:41.296: INFO: Trying to get logs from node iruya-node pod pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8 container test-container:
STEP: delete the pod
Jan 4 12:01:41.435: INFO: Waiting for pod pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8 to disappear
Jan 4 12:01:41.441: INFO: Pod pod-99f1f6fa-ac91-47e5-911a-5f4cc5bc65d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:01:41.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8496" for this suite.
Jan 4 12:01:47.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:01:47.633: INFO: namespace emptydir-8496 deletion completed in 6.184655223s
• [SLOW TEST:16.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:01:47.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 4 12:01:48.014: INFO: Number of nodes with available pods: 0
Jan 4 12:01:48.014: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:49.931: INFO: Number of nodes with available pods: 0
Jan 4 12:01:49.931: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:50.024: INFO: Number of nodes with available pods: 0
Jan 4 12:01:50.024: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:51.810: INFO: Number of nodes with available pods: 0
Jan 4 12:01:51.810: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:52.150: INFO: Number of nodes with available pods: 0
Jan 4 12:01:52.150: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:53.042: INFO: Number of nodes with available pods: 0
Jan 4 12:01:53.042: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:54.161: INFO: Number of nodes with available pods: 0
Jan 4 12:01:54.161: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:57.877: INFO: Number of nodes with available pods: 0
Jan 4 12:01:57.877: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:58.634: INFO: Number of nodes with available pods: 0
Jan 4 12:01:58.634: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:01:59.029: INFO: Number of nodes with available pods: 0
Jan 4 12:01:59.029: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:00.738: INFO: Number of nodes with available pods: 0
Jan 4 12:02:00.738: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:01.131: INFO: Number of nodes with available pods: 0
Jan 4 12:02:01.131: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:02.062: INFO: Number of nodes with available pods: 2
Jan 4 12:02:02.062: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 4 12:02:02.316: INFO: Number of nodes with available pods: 1
Jan 4 12:02:02.316: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:03.328: INFO: Number of nodes with available pods: 1
Jan 4 12:02:03.328: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:04.330: INFO: Number of nodes with available pods: 1
Jan 4 12:02:04.330: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:05.326: INFO: Number of nodes with available pods: 1
Jan 4 12:02:05.326: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:06.333: INFO: Number of nodes with available pods: 1
Jan 4 12:02:06.333: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:07.327: INFO: Number of nodes with available pods: 1
Jan 4 12:02:07.327: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:08.343: INFO: Number of nodes with available pods: 1
Jan 4 12:02:08.343: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:09.335: INFO: Number of nodes with available pods: 1
Jan 4 12:02:09.335: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:10.376: INFO: Number of nodes with available pods: 1
Jan 4 12:02:10.376: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:11.332: INFO: Number of nodes with available pods: 1
Jan 4 12:02:11.332: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:12.336: INFO: Number of nodes with available pods: 1
Jan 4 12:02:12.336: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:13.328: INFO: Number of nodes with available pods: 1
Jan 4 12:02:13.328: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:14.333: INFO: Number of nodes with available pods: 1
Jan 4 12:02:14.333: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:15.329: INFO: Number of nodes with available pods: 1
Jan 4 12:02:15.329: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:16.340: INFO: Number of nodes with available pods: 1
Jan 4 12:02:16.341: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:17.436: INFO: Number of nodes with available pods: 1
Jan 4 12:02:17.436: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:18.339: INFO: Number of nodes with available pods: 1
Jan 4 12:02:18.339: INFO: Node iruya-node is running more than one daemon pod
Jan 4 12:02:19.406: INFO: Number of nodes with available pods: 2
Jan 4 12:02:19.406: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3078, will wait for the garbage collector to delete the pods
Jan 4 12:02:19.504: INFO: Deleting DaemonSet.extensions daemon-set took: 29.005233ms
Jan 4 12:02:19.804: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.604914ms
Jan 4 12:02:38.016: INFO: Number of nodes with available pods: 0
Jan 4 12:02:38.016: INFO: Number of running nodes: 0, number of available pods: 0
Jan 4 12:02:38.023: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3078/daemonsets","resourceVersion":"19255348"},"items":null}
Jan 4 12:02:38.027: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3078/pods","resourceVersion":"19255348"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:02:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3078" for this suite.
Jan 4 12:02:46.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:02:46.247: INFO: namespace daemonsets-3078 deletion completed in 8.142980864s
• [SLOW TEST:58.613 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:02:46.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a1b8417a-1e0a-40dd-87cb-88b7f279f00e
STEP: Creating a pod to test consume secrets
Jan 4 12:02:46.603: INFO: Waiting up to 5m0s for pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089" in namespace "secrets-8316" to be "success or failure"
Jan 4 12:02:46.615: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Pending", Reason="", readiness=false. Elapsed: 11.815439ms
Jan 4 12:02:48.630: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026860633s
Jan 4 12:02:50.645: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041338432s
Jan 4 12:02:52.655: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051338271s
Jan 4 12:02:54.666: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06229519s
Jan 4 12:02:56.673: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069114181s
Jan 4 12:02:58.688: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.084629572s
STEP: Saw pod success
Jan 4 12:02:58.688: INFO: Pod "pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089" satisfied condition "success or failure"
Jan 4 12:02:58.692: INFO: Trying to get logs from node iruya-node pod pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089 container secret-volume-test:
STEP: delete the pod
Jan 4 12:02:58.792: INFO: Waiting for pod pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089 to disappear
Jan 4 12:02:58.800: INFO: Pod pod-secrets-2c74c218-88b7-4ab1-91c5-e62c8ce19089 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:02:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8316" for this suite.
Jan 4 12:03:04.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:03:05.018: INFO: namespace secrets-8316 deletion completed in 6.21323752s
• [SLOW TEST:18.771 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:03:05.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 4 12:03:05.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5529'
Jan 4 12:03:07.026: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 4 12:03:07.026: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 4 12:03:07.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5529'
Jan 4 12:03:07.159: INFO: stderr: ""
Jan 4 12:03:07.159: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:03:07.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5529" for this suite.
Jan 4 12:03:13.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:03:13.353: INFO: namespace kubectl-5529 deletion completed in 6.183661461s
• [SLOW TEST:8.335 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:03:13.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:03:18.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7353" for this suite.
Jan 4 12:03:25.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:03:25.203: INFO: namespace watch-7353 deletion completed in 6.228697449s
• [SLOW TEST:11.850 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:03:25.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 12:03:25.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4" in namespace "downward-api-6230" to be "success or failure"
Jan 4 12:03:25.342: INFO: Pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.543999ms
Jan 4 12:03:27.464: INFO: Pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150182395s
Jan 4 12:03:29.485: INFO: Pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171314681s
Jan 4 12:03:31.526: INFO: Pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211918341s
Jan 4 12:03:33.537: INFO: Pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.223236871s
STEP: Saw pod success
Jan 4 12:03:33.537: INFO: Pod "downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4" satisfied condition "success or failure"
Jan 4 12:03:33.546: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4 container client-container:
STEP: delete the pod
Jan 4 12:03:33.596: INFO: Waiting for pod downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4 to disappear
Jan 4 12:03:33.600: INFO: Pod downwardapi-volume-c3924b86-b5b1-4106-bd9e-84904f66c8e4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:03:33.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6230" for this suite.
Jan 4 12:03:39.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:03:39.823: INFO: namespace downward-api-6230 deletion completed in 6.218374654s
• [SLOW TEST:14.619 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:03:39.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting
for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1785/configmap-test-bd760149-e126-42c9-a395-13b739a07f08 STEP: Creating a pod to test consume configMaps Jan 4 12:03:39.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae" in namespace "configmap-1785" to be "success or failure" Jan 4 12:03:39.943: INFO: Pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae": Phase="Pending", Reason="", readiness=false. Elapsed: 44.902205ms Jan 4 12:03:41.956: INFO: Pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058063937s Jan 4 12:03:43.966: INFO: Pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068239562s Jan 4 12:03:45.976: INFO: Pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078695714s Jan 4 12:03:48.004: INFO: Pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.106755806s STEP: Saw pod success Jan 4 12:03:48.005: INFO: Pod "pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae" satisfied condition "success or failure" Jan 4 12:03:48.011: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae container env-test: STEP: delete the pod Jan 4 12:03:48.115: INFO: Waiting for pod pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae to disappear Jan 4 12:03:48.160: INFO: Pod pod-configmaps-b9c017ff-03d5-43bc-92f5-8c3210831fae no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:03:48.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1785" for this suite. Jan 4 12:03:54.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:03:54.322: INFO: namespace configmap-1785 deletion completed in 6.155320327s • [SLOW TEST:14.499 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:03:54.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 12:03:54.412: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:04:04.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-935" for this suite. Jan 4 12:04:48.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:04:48.692: INFO: namespace pods-935 deletion completed in 44.195698524s • [SLOW TEST:54.370 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:04:48.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 4 12:04:48.786: INFO: Waiting up to 5m0s for pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46" in namespace "downward-api-1477" to be "success or failure" Jan 4 12:04:48.790: INFO: Pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531974ms Jan 4 12:04:50.797: INFO: Pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011392218s Jan 4 12:04:52.804: INFO: Pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018457169s Jan 4 12:04:54.816: INFO: Pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030294683s Jan 4 12:04:56.827: INFO: Pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041488895s STEP: Saw pod success Jan 4 12:04:56.827: INFO: Pod "downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46" satisfied condition "success or failure" Jan 4 12:04:56.832: INFO: Trying to get logs from node iruya-node pod downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46 container dapi-container: STEP: delete the pod Jan 4 12:04:56.928: INFO: Waiting for pod downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46 to disappear Jan 4 12:04:56.942: INFO: Pod downward-api-907f50dd-9c15-4209-8236-cb1ddbb43d46 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:04:56.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1477" for this suite. 
Jan 4 12:05:03.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:05:03.138: INFO: namespace downward-api-1477 deletion completed in 6.18435883s • [SLOW TEST:14.446 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:05:03.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d29a25e8-82b5-40f2-a825-a8244c94134a STEP: Creating a pod to test consume configMaps Jan 4 12:05:03.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d" in namespace "projected-7366" to be "success or failure" Jan 4 12:05:03.371: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.991375ms Jan 4 12:05:05.386: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029085955s Jan 4 12:05:07.395: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038055752s Jan 4 12:05:09.407: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050474337s Jan 4 12:05:11.413: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056751703s Jan 4 12:05:13.424: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067287677s STEP: Saw pod success Jan 4 12:05:13.424: INFO: Pod "pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d" satisfied condition "success or failure" Jan 4 12:05:13.431: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d container projected-configmap-volume-test: STEP: delete the pod Jan 4 12:05:13.572: INFO: Waiting for pod pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d to disappear Jan 4 12:05:13.581: INFO: Pod pod-projected-configmaps-788cd208-8a35-41e9-90fa-9751977d7b5d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:05:13.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7366" for this suite. 
Jan 4 12:05:19.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:05:19.727: INFO: namespace projected-7366 deletion completed in 6.130909757s • [SLOW TEST:16.588 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:05:19.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 4 12:05:19.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25" in namespace "projected-2927" to be "success or failure" Jan 4 12:05:19.871: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.208063ms Jan 4 12:05:21.884: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029314315s Jan 4 12:05:23.896: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041623995s Jan 4 12:05:25.908: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053084464s Jan 4 12:05:27.918: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063328613s Jan 4 12:05:29.936: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081348608s STEP: Saw pod success Jan 4 12:05:29.936: INFO: Pod "downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25" satisfied condition "success or failure" Jan 4 12:05:29.942: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25 container client-container: STEP: delete the pod Jan 4 12:05:30.007: INFO: Waiting for pod downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25 to disappear Jan 4 12:05:30.043: INFO: Pod downwardapi-volume-fc5cca3f-06e9-4999-892b-f45e4e048b25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:05:30.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2927" for this suite. 
Jan 4 12:05:36.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:05:36.240: INFO: namespace projected-2927 deletion completed in 6.18607764s • [SLOW TEST:16.513 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:05:36.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-6jml STEP: Creating a pod to test atomic-volume-subpath Jan 4 12:05:37.373: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6jml" in namespace "subpath-894" to be "success or failure" Jan 4 12:05:37.379: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351626ms Jan 4 12:05:39.389: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015757932s Jan 4 12:05:41.420: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047058564s Jan 4 12:05:43.435: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062301424s Jan 4 12:05:45.440: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067416748s Jan 4 12:05:47.456: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 10.082992777s Jan 4 12:05:49.467: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 12.094446382s Jan 4 12:05:51.473: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 14.099930317s Jan 4 12:05:53.483: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 16.110263308s Jan 4 12:05:55.491: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 18.118124102s Jan 4 12:05:57.499: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 20.12648396s Jan 4 12:05:59.508: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 22.134814164s Jan 4 12:06:01.515: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 24.142070693s Jan 4 12:06:03.527: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 26.153779727s Jan 4 12:06:05.534: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Running", Reason="", readiness=true. Elapsed: 28.160976568s Jan 4 12:06:07.544: INFO: Pod "pod-subpath-test-configmap-6jml": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.171358576s STEP: Saw pod success Jan 4 12:06:07.544: INFO: Pod "pod-subpath-test-configmap-6jml" satisfied condition "success or failure" Jan 4 12:06:07.549: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-6jml container test-container-subpath-configmap-6jml: STEP: delete the pod Jan 4 12:06:07.782: INFO: Waiting for pod pod-subpath-test-configmap-6jml to disappear Jan 4 12:06:07.850: INFO: Pod pod-subpath-test-configmap-6jml no longer exists STEP: Deleting pod pod-subpath-test-configmap-6jml Jan 4 12:06:07.850: INFO: Deleting pod "pod-subpath-test-configmap-6jml" in namespace "subpath-894" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:06:07.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-894" for this suite. Jan 4 12:06:13.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:06:14.109: INFO: namespace subpath-894 deletion completed in 6.195851516s • [SLOW TEST:37.869 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:06:14.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 4 12:06:14.280: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9886,SelfLink:/api/v1/namespaces/watch-9886/configmaps/e2e-watch-test-label-changed,UID:b9acc1d8-d16d-4cdc-ab37-6b4d234e09dd,ResourceVersion:19255996,Generation:0,CreationTimestamp:2020-01-04 12:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 12:06:14.280: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9886,SelfLink:/api/v1/namespaces/watch-9886/configmaps/e2e-watch-test-label-changed,UID:b9acc1d8-d16d-4cdc-ab37-6b4d234e09dd,ResourceVersion:19255997,Generation:0,CreationTimestamp:2020-01-04 12:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 4 12:06:14.280: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9886,SelfLink:/api/v1/namespaces/watch-9886/configmaps/e2e-watch-test-label-changed,UID:b9acc1d8-d16d-4cdc-ab37-6b4d234e09dd,ResourceVersion:19255998,Generation:0,CreationTimestamp:2020-01-04 12:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 4 12:06:24.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9886,SelfLink:/api/v1/namespaces/watch-9886/configmaps/e2e-watch-test-label-changed,UID:b9acc1d8-d16d-4cdc-ab37-6b4d234e09dd,ResourceVersion:19256013,Generation:0,CreationTimestamp:2020-01-04 12:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 
12:06:24.363: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9886,SelfLink:/api/v1/namespaces/watch-9886/configmaps/e2e-watch-test-label-changed,UID:b9acc1d8-d16d-4cdc-ab37-6b4d234e09dd,ResourceVersion:19256014,Generation:0,CreationTimestamp:2020-01-04 12:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 4 12:06:24.364: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9886,SelfLink:/api/v1/namespaces/watch-9886/configmaps/e2e-watch-test-label-changed,UID:b9acc1d8-d16d-4cdc-ab37-6b4d234e09dd,ResourceVersion:19256015,Generation:0,CreationTimestamp:2020-01-04 12:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:06:24.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9886" for this suite. 
Jan 4 12:06:30.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:06:30.502: INFO: namespace watch-9886 deletion completed in 6.120751569s • [SLOW TEST:16.391 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:06:30.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 4 12:06:30.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-668' Jan 4 12:06:30.806: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 12:06:30.806: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 4 12:06:30.811: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 4 12:06:30.830: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 4 12:06:30.902: INFO: scanned /root for discovery docs: Jan 4 12:06:30.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-668' Jan 4 12:06:56.279: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 4 12:06:56.279: INFO: stdout: "Created e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd\nScaling up e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jan 4 12:06:56.279: INFO: stdout: "Created e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd\nScaling up e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 4 12:06:56.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-668' Jan 4 12:06:56.478: INFO: stderr: "" Jan 4 12:06:56.478: INFO: stdout: "e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd-tqptq e2e-test-nginx-rc-q52vb " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 4 12:07:01.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-668' Jan 4 12:07:01.682: INFO: stderr: "" Jan 4 12:07:01.682: INFO: stdout: "e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd-tqptq " Jan 4 12:07:01.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd-tqptq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-668' Jan 4 12:07:01.903: INFO: stderr: "" Jan 4 12:07:01.903: INFO: stdout: "true" Jan 4 12:07:01.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd-tqptq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-668' Jan 4 12:07:02.074: INFO: stderr: "" Jan 4 12:07:02.074: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 4 12:07:02.074: INFO: e2e-test-nginx-rc-1b70e1a5487703cdff0130efa57667bd-tqptq is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 4 12:07:02.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-668' Jan 4 12:07:02.246: INFO: stderr: "" Jan 4 12:07:02.247: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:07:02.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-668" for this suite. 
Jan 4 12:07:14.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:07:14.377: INFO: namespace kubectl-668 deletion completed in 12.123181341s • [SLOW TEST:43.875 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:07:14.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-1da4706d-5396-4335-b842-4d99da0567cb STEP: Creating secret with name s-test-opt-upd-9b2b4633-894c-4630-98ce-b4dc88ac55c5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1da4706d-5396-4335-b842-4d99da0567cb STEP: Updating secret s-test-opt-upd-9b2b4633-894c-4630-98ce-b4dc88ac55c5 STEP: Creating secret with name s-test-opt-create-b68c2b79-ec6b-4029-912e-36b908061049 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:07:33.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1025" for this suite. Jan 4 12:08:13.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:08:13.313: INFO: namespace projected-1025 deletion completed in 40.195203755s • [SLOW TEST:58.936 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:08:13.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-273/secret-test-a73e9196-c367-45e6-9da2-cf8ed3f51bfd STEP: Creating a pod to test consume secrets Jan 4 12:08:13.475: INFO: Waiting up to 5m0s for pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26" in namespace "secrets-273" to be "success or failure" Jan 4 12:08:13.514: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Pending", Reason="", 
readiness=false. Elapsed: 39.215942ms Jan 4 12:08:15.523: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048505735s Jan 4 12:08:17.547: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072486223s Jan 4 12:08:19.556: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081032552s Jan 4 12:08:21.576: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101161723s Jan 4 12:08:23.587: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111904329s Jan 4 12:08:25.595: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.120235037s STEP: Saw pod success Jan 4 12:08:25.595: INFO: Pod "pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26" satisfied condition "success or failure" Jan 4 12:08:25.598: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26 container env-test: STEP: delete the pod Jan 4 12:08:25.645: INFO: Waiting for pod pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26 to disappear Jan 4 12:08:25.654: INFO: Pod pod-configmaps-f66207ae-8ced-4741-8e43-0886fca09d26 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:08:25.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-273" for this suite. 
Jan 4 12:08:31.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:08:31.912: INFO: namespace secrets-273 deletion completed in 6.252738582s • [SLOW TEST:18.598 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:08:31.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:08:42.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6943" for this suite. 
Jan 4 12:08:48.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:08:48.435: INFO: namespace emptydir-wrapper-6943 deletion completed in 6.302553769s • [SLOW TEST:16.523 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:08:48.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 4 12:08:48.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7413' Jan 4 12:08:48.930: INFO: stderr: "" Jan 4 12:08:48.930: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Jan 4 12:08:49.943: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:49.943: INFO: Found 0 / 1 Jan 4 12:08:50.936: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:50.936: INFO: Found 0 / 1 Jan 4 12:08:51.939: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:51.939: INFO: Found 0 / 1 Jan 4 12:08:52.937: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:52.937: INFO: Found 0 / 1 Jan 4 12:08:53.938: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:53.938: INFO: Found 0 / 1 Jan 4 12:08:54.937: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:54.937: INFO: Found 0 / 1 Jan 4 12:08:55.936: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:55.936: INFO: Found 0 / 1 Jan 4 12:08:56.946: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:56.946: INFO: Found 0 / 1 Jan 4 12:08:57.938: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:57.938: INFO: Found 1 / 1 Jan 4 12:08:57.938: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 4 12:08:57.943: INFO: Selector matched 1 pods for map[app:redis] Jan 4 12:08:57.943: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 4 12:08:57.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x6t2n redis-master --namespace=kubectl-7413' Jan 4 12:08:58.204: INFO: stderr: "" Jan 4 12:08:58.204: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jan 12:08:57.090 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 12:08:57.090 # Server started, Redis version 3.2.12\n1:M 04 Jan 12:08:57.090 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 12:08:57.090 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 4 12:08:58.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x6t2n redis-master --namespace=kubectl-7413 --tail=1' Jan 4 12:08:58.400: INFO: stderr: "" Jan 4 12:08:58.400: INFO: stdout: "1:M 04 Jan 12:08:57.090 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 4 12:08:58.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x6t2n redis-master --namespace=kubectl-7413 --limit-bytes=1' Jan 4 12:08:58.537: INFO: stderr: "" Jan 4 12:08:58.538: INFO: stdout: " " STEP: exposing timestamps Jan 4 12:08:58.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x6t2n redis-master --namespace=kubectl-7413 --tail=1 --timestamps' Jan 4 12:08:58.630: INFO: stderr: "" Jan 4 12:08:58.630: INFO: 
stdout: "2020-01-04T12:08:57.091711883Z 1:M 04 Jan 12:08:57.090 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 4 12:09:01.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x6t2n redis-master --namespace=kubectl-7413 --since=1s' Jan 4 12:09:01.339: INFO: stderr: "" Jan 4 12:09:01.339: INFO: stdout: "" Jan 4 12:09:01.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x6t2n redis-master --namespace=kubectl-7413 --since=24h' Jan 4 12:09:01.495: INFO: stderr: "" Jan 4 12:09:01.495: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jan 12:08:57.090 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 12:08:57.090 # Server started, Redis version 3.2.12\n1:M 04 Jan 12:08:57.090 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 04 Jan 12:08:57.090 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 4 12:09:01.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7413' Jan 4 12:09:01.648: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 12:09:01.648: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 4 12:09:01.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7413' Jan 4 12:09:01.793: INFO: stderr: "No resources found.\n" Jan 4 12:09:01.793: INFO: stdout: "" Jan 4 12:09:01.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7413 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 12:09:01.939: INFO: stderr: "" Jan 4 12:09:01.939: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:09:01.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7413" for this suite. 
Jan 4 12:09:23.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:09:24.218: INFO: namespace kubectl-7413 deletion completed in 22.275342463s • [SLOW TEST:35.782 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:09:24.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 4 12:09:24.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65" in namespace "downward-api-3526" to be "success or failure" Jan 4 12:09:24.361: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.623369ms Jan 4 12:09:26.380: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074234192s Jan 4 12:09:28.386: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080514944s Jan 4 12:09:30.396: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090957694s Jan 4 12:09:32.429: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65": Phase="Running", Reason="", readiness=true. Elapsed: 8.123835507s Jan 4 12:09:34.436: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.130012581s STEP: Saw pod success Jan 4 12:09:34.436: INFO: Pod "downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65" satisfied condition "success or failure" Jan 4 12:09:34.439: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65 container client-container: STEP: delete the pod Jan 4 12:09:34.508: INFO: Waiting for pod downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65 to disappear Jan 4 12:09:34.628: INFO: Pod downwardapi-volume-448b12b6-9133-452f-8575-fcb3b09efd65 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:09:34.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3526" for this suite. 
Jan 4 12:09:40.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:09:40.789: INFO: namespace downward-api-3526 deletion completed in 6.150901823s • [SLOW TEST:16.571 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:09:40.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 4 12:09:40.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7335' Jan 4 12:09:41.048: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED 
and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 12:09:41.048: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jan 4 12:09:45.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7335' Jan 4 12:09:45.243: INFO: stderr: "" Jan 4 12:09:45.243: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:09:45.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7335" for this suite. 
Jan 4 12:09:51.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:09:51.475: INFO: namespace kubectl-7335 deletion completed in 6.227585441s • [SLOW TEST:10.685 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:09:51.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-jnmh STEP: Creating a pod to test atomic-volume-subpath Jan 4 12:09:51.613: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jnmh" in namespace "subpath-6736" to be "success or failure" Jan 4 12:09:51.622: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.365355ms Jan 4 12:09:53.638: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02459968s Jan 4 12:09:55.652: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038988535s Jan 4 12:09:57.663: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04948326s Jan 4 12:09:59.671: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 8.057500141s Jan 4 12:10:01.678: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 10.064758918s Jan 4 12:10:03.687: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 12.073819774s Jan 4 12:10:05.700: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 14.08696609s Jan 4 12:10:07.722: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 16.108402419s Jan 4 12:10:09.731: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 18.117663396s Jan 4 12:10:11.744: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 20.131187011s Jan 4 12:10:13.762: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 22.148355876s Jan 4 12:10:15.785: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 24.171847147s Jan 4 12:10:17.798: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 26.185010998s Jan 4 12:10:19.813: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Running", Reason="", readiness=true. Elapsed: 28.20016347s Jan 4 12:10:21.828: INFO: Pod "pod-subpath-test-secret-jnmh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.214695705s STEP: Saw pod success Jan 4 12:10:21.828: INFO: Pod "pod-subpath-test-secret-jnmh" satisfied condition "success or failure" Jan 4 12:10:21.833: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-jnmh container test-container-subpath-secret-jnmh: STEP: delete the pod Jan 4 12:10:22.406: INFO: Waiting for pod pod-subpath-test-secret-jnmh to disappear Jan 4 12:10:22.414: INFO: Pod pod-subpath-test-secret-jnmh no longer exists STEP: Deleting pod pod-subpath-test-secret-jnmh Jan 4 12:10:22.414: INFO: Deleting pod "pod-subpath-test-secret-jnmh" in namespace "subpath-6736" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:10:22.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6736" for this suite. Jan 4 12:10:28.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:10:28.710: INFO: namespace subpath-6736 deletion completed in 6.284737987s • [SLOW TEST:37.234 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 
4 12:10:28.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 4 12:10:31.258: INFO: Pod name wrapped-volume-race-905eb5d2-cdee-416f-87ef-5d718d6149c9: Found 0 pods out of 5 Jan 4 12:10:36.280: INFO: Pod name wrapped-volume-race-905eb5d2-cdee-416f-87ef-5d718d6149c9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-905eb5d2-cdee-416f-87ef-5d718d6149c9 in namespace emptydir-wrapper-158, will wait for the garbage collector to delete the pods Jan 4 12:11:06.393: INFO: Deleting ReplicationController wrapped-volume-race-905eb5d2-cdee-416f-87ef-5d718d6149c9 took: 11.226451ms Jan 4 12:11:06.793: INFO: Terminating ReplicationController wrapped-volume-race-905eb5d2-cdee-416f-87ef-5d718d6149c9 pods took: 400.315993ms STEP: Creating RC which spawns configmap-volume pods Jan 4 12:11:57.371: INFO: Pod name wrapped-volume-race-5d683e57-1ac9-4001-a899-be9f494464aa: Found 0 pods out of 5 Jan 4 12:12:02.479: INFO: Pod name wrapped-volume-race-5d683e57-1ac9-4001-a899-be9f494464aa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5d683e57-1ac9-4001-a899-be9f494464aa in namespace emptydir-wrapper-158, will wait for the garbage collector to delete the pods Jan 4 12:12:52.687: INFO: Deleting ReplicationController wrapped-volume-race-5d683e57-1ac9-4001-a899-be9f494464aa took: 32.310875ms Jan 4 12:12:53.087: INFO: Terminating ReplicationController wrapped-volume-race-5d683e57-1ac9-4001-a899-be9f494464aa pods took: 400.456629ms STEP: Creating RC which spawns 
configmap-volume pods Jan 4 12:13:37.647: INFO: Pod name wrapped-volume-race-a81bccb9-f235-418c-92c2-3cc7cc251f82: Found 0 pods out of 5 Jan 4 12:13:42.696: INFO: Pod name wrapped-volume-race-a81bccb9-f235-418c-92c2-3cc7cc251f82: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a81bccb9-f235-418c-92c2-3cc7cc251f82 in namespace emptydir-wrapper-158, will wait for the garbage collector to delete the pods Jan 4 12:14:16.800: INFO: Deleting ReplicationController wrapped-volume-race-a81bccb9-f235-418c-92c2-3cc7cc251f82 took: 14.405642ms Jan 4 12:14:17.300: INFO: Terminating ReplicationController wrapped-volume-race-a81bccb9-f235-418c-92c2-3cc7cc251f82 pods took: 500.278484ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:15:18.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-158" for this suite. 
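The race check above repeatedly performs "Creating 50 configmaps" and then an RC whose pods mount them. A minimal sketch of generating that many ConfigMap manifests (the `racewrap-cm-*` names are hypothetical, not the ones the framework uses, and the RC half is omitted):

```shell
# Sketch: emit 50 ConfigMaps into one multi-document manifest, mirroring the
# "Creating 50 configmaps" step. Apply against a live cluster with:
#   kubectl apply -f /tmp/racewrap.yaml
out=/tmp/racewrap.yaml
: > "$out"
i=0
while [ "$i" -lt 50 ]; do
  cat >> "$out" <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: racewrap-cm-$i
data:
  key: "value-$i"
EOF
  i=$((i + 1))
done
echo "generated $(grep -c '^kind: ConfigMap' "$out") configmaps"  # → generated 50 configmaps
```

The test's point is that many configmap volumes mounted by concurrently starting pods must not race in the kubelet's emptyDir wrapper; the manifest shape above is just the raw material for that.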
Jan 4 12:15:28.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:15:28.849: INFO: namespace emptydir-wrapper-158 deletion completed in 10.122513646s • [SLOW TEST:300.139 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:15:28.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 4 12:15:28.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5412' Jan 4 12:15:31.422: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 12:15:31.422: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 4 12:15:31.494: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-5tr4q] Jan 4 12:15:31.495: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-5tr4q" in namespace "kubectl-5412" to be "running and ready" Jan 4 12:15:31.505: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02118ms Jan 4 12:15:33.514: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019404305s Jan 4 12:15:35.521: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026557156s Jan 4 12:15:37.528: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033568476s Jan 4 12:15:39.541: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046049342s Jan 4 12:15:41.551: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.056480904s Jan 4 12:15:43.566: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071704517s Jan 4 12:15:45.589: INFO: Pod "e2e-test-nginx-rc-5tr4q": Phase="Running", Reason="", readiness=true. Elapsed: 14.094106733s Jan 4 12:15:45.589: INFO: Pod "e2e-test-nginx-rc-5tr4q" satisfied condition "running and ready" Jan 4 12:15:45.589: INFO: Wanted all 1 pods to be running and ready. Result: true. 
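The deprecation warning captured above is expected on a v1.15 client. For reference, the non-deprecated equivalents the warning points at (these require a live cluster; there is no one-flag replacement that still produces a ReplicationController, so the second form creates a Deployment instead):

```shell
# Direct replacement suggested by the warning: the bare-pod generator
kubectl run e2e-test-nginx-rc --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5412

# Or create the controller object explicitly (a Deployment rather than an RC)
kubectl create deployment e2e-test-nginx \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5412
```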
Pods: [e2e-test-nginx-rc-5tr4q] Jan 4 12:15:45.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5412' Jan 4 12:15:45.865: INFO: stderr: "" Jan 4 12:15:45.865: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jan 4 12:15:45.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5412' Jan 4 12:15:46.970: INFO: stderr: "" Jan 4 12:15:46.970: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:15:46.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5412" for this suite. Jan 4 12:16:11.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:16:11.146: INFO: namespace kubectl-5412 deletion completed in 24.157723107s • [SLOW TEST:42.297 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jan 4 12:16:11.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 4 12:16:11.223: INFO: Waiting up to 5m0s for pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b" in namespace "emptydir-514" to be "success or failure" Jan 4 12:16:11.363: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 139.710075ms Jan 4 12:16:13.369: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146167216s Jan 4 12:16:15.493: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269921918s Jan 4 12:16:17.504: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280975453s Jan 4 12:16:19.517: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293411575s Jan 4 12:16:21.525: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.302071592s Jan 4 12:16:23.545: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.321577054s Jan 4 12:16:25.551: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.328050467s Jan 4 12:16:27.559: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.335256731s STEP: Saw pod success Jan 4 12:16:27.559: INFO: Pod "pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b" satisfied condition "success or failure" Jan 4 12:16:27.562: INFO: Trying to get logs from node iruya-node pod pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b container test-container: STEP: delete the pod Jan 4 12:16:27.760: INFO: Waiting for pod pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b to disappear Jan 4 12:16:27.795: INFO: Pod pod-aac08473-992c-4f2e-b32b-3f7c7acc3e4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:16:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-514" for this suite. Jan 4 12:16:33.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:16:34.038: INFO: namespace emptydir-514 deletion completed in 6.227873162s • [SLOW TEST:22.891 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:16:34.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-e9baa2a4-bdb5-4d61-b762-0c7a9466d928 in namespace container-probe-4024 Jan 4 12:16:42.202: INFO: Started pod busybox-e9baa2a4-bdb5-4d61-b762-0c7a9466d928 in namespace container-probe-4024 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 12:16:42.207: INFO: Initial restart count of pod busybox-e9baa2a4-bdb5-4d61-b762-0c7a9466d928 is 0 Jan 4 12:17:34.856: INFO: Restart count of pod container-probe-4024/busybox-e9baa2a4-bdb5-4d61-b762-0c7a9466d928 is now 1 (52.649528194s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:17:34.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4024" for this suite. 
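The restart observed above (count 0 → 1 after ~52s) follows the standard pattern for this conformance test: the container creates `/tmp/health`, deletes it after a delay, and the failing `cat /tmp/health` exec probe triggers a restart. A hedged sketch of that pod shape (the timings and pod name here are illustrative, not the suite's exact values):

```shell
# Write a pod manifest whose exec liveness probe fails once the container
# removes its own health file. Apply with: kubectl apply -f /tmp/liveness-exec.yaml
cat > /tmp/liveness-exec.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
grep -q 'livenessProbe' /tmp/liveness-exec.yaml && echo "manifest written"  # → manifest written
```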
Jan 4 12:17:40.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:17:41.044: INFO: namespace container-probe-4024 deletion completed in 6.122370917s • [SLOW TEST:67.006 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:17:41.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
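The helper container created in the step above serves the hook's HTTP request. A sketch of the pod under test with a `postStart` httpGet hook (the host/path/port values are hypothetical stand-ins; the suite points the hook at its handler pod's address):

```shell
# Write a pod manifest with a lifecycle postStart httpGet hook, matching the
# shape this test exercises. Apply with: kubectl apply -f /tmp/poststart.yaml
cat > /tmp/poststart.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          # Hypothetical handler address; the e2e suite targets the helper
          # pod created in the preceding step.
          host: 10.32.0.1
          path: /echo?msg=poststart
          port: 8080
EOF
grep -q 'httpGet:' /tmp/poststart.yaml && echo "postStart hook manifest written"  # → postStart hook manifest written
```

The pod is not considered started until the hook's request succeeds, which is what the "check poststart hook" step below verifies on the handler side.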
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 4 12:17:59.666: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:17:59.694: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:01.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:01.709: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:03.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:03.708: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:05.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:05.727: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:07.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:07.705: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:09.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:09.703: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:11.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:11.716: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:13.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:13.704: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:15.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:15.705: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:18:17.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:18:17.702: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 
4 12:18:17.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4517" for this suite. Jan 4 12:18:39.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:18:39.860: INFO: namespace container-lifecycle-hook-4517 deletion completed in 22.152101749s • [SLOW TEST:58.816 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:18:39.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 4 12:18:39.978: INFO: Waiting up to 5m0s for pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545" in namespace "emptydir-4897" to be "success or failure" Jan 4 12:18:39.984: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.411498ms Jan 4 12:18:41.995: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016224439s Jan 4 12:18:44.003: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024809532s Jan 4 12:18:46.017: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038870095s Jan 4 12:18:48.028: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049471478s Jan 4 12:18:50.034: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055327635s STEP: Saw pod success Jan 4 12:18:50.034: INFO: Pod "pod-ee23f038-7caf-47f2-b1d4-72a4887f8545" satisfied condition "success or failure" Jan 4 12:18:50.037: INFO: Trying to get logs from node iruya-node pod pod-ee23f038-7caf-47f2-b1d4-72a4887f8545 container test-container: STEP: delete the pod Jan 4 12:18:50.138: INFO: Waiting for pod pod-ee23f038-7caf-47f2-b1d4-72a4887f8545 to disappear Jan 4 12:18:50.145: INFO: Pod pod-ee23f038-7caf-47f2-b1d4-72a4887f8545 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:18:50.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4897" for this suite. 
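The "(non-root,0644,tmpfs)" name above encodes the test's three parameters: run as a non-root user, write a file with mode 0644, into a tmpfs-backed emptyDir. A hedged sketch of that volume shape (image, command, and the 1001 UID are illustrative; the suite uses its own test image to write and verify the file's permissions):

```shell
# Write a pod manifest with a memory-backed emptyDir mounted by a non-root
# container, mirroring the parameters in the test name.
cat > /tmp/emptydir-tmpfs.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  securityContext:
    runAsUser: 1001        # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory       # tmpfs backing
EOF
echo "emptyDir volumes: $(grep -c 'emptyDir:' /tmp/emptydir-tmpfs.yaml)"  # → emptyDir volumes: 1
```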
Jan 4 12:18:56.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:18:56.358: INFO: namespace emptydir-4897 deletion completed in 6.206344176s • [SLOW TEST:16.498 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:18:56.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 4 12:19:12.682: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 12:19:12.725: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 12:19:14.725: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 12:19:14.742: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 12:19:16.725: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 12:19:16.762: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 12:19:18.725: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 12:19:18.747: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 12:19:20.725: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 12:19:20.737: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:19:20.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2306" for this suite. 
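The "Waiting for pod ... to disappear / still exists" pairs above are a 2-second poll loop. A runnable sketch of that loop with the kubectl lookup stubbed out by a file check, so it executes without a cluster (the marker path and 4-second "termination" are stand-ins for the pod's graceful-delete window, during which the preStop hook fires):

```shell
# Poll a predicate every 2s until it reports "gone" or a deadline passes,
# mirroring the e2e framework's wait-for-pod-to-disappear helper.
target=/tmp/pod-with-prestop-http-hook.marker
touch "$target"
( sleep 4; rm -f "$target" ) &   # stand-in for pod termination completing
deadline=$(( $(date +%s) + 30 ))
while [ -e "$target" ]; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for pod to disappear"
    exit 1
  fi
  echo "Pod still exists"
  sleep 2
done
echo "Pod no longer exists"
```

In the real test, the predicate is a GET of the pod object returning NotFound rather than a file check.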
Jan 4 12:19:42.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:19:42.897: INFO: namespace container-lifecycle-hook-2306 deletion completed in 22.115306023s • [SLOW TEST:46.538 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:19:42.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7928 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 4 
12:19:43.031: INFO: Found 0 stateful pods, waiting for 3 Jan 4 12:19:53.046: INFO: Found 2 stateful pods, waiting for 3 Jan 4 12:20:03.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 12:20:03.042: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 12:20:03.042: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 4 12:20:13.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 12:20:13.042: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 12:20:13.042: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 4 12:20:13.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7928 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 4 12:20:13.489: INFO: stderr: "I0104 12:20:13.286899 1254 log.go:172] (0xc0008f4630) (0xc00032ab40) Create stream\nI0104 12:20:13.287002 1254 log.go:172] (0xc0008f4630) (0xc00032ab40) Stream added, broadcasting: 1\nI0104 12:20:13.290241 1254 log.go:172] (0xc0008f4630) Reply frame received for 1\nI0104 12:20:13.290291 1254 log.go:172] (0xc0008f4630) (0xc000628000) Create stream\nI0104 12:20:13.290308 1254 log.go:172] (0xc0008f4630) (0xc000628000) Stream added, broadcasting: 3\nI0104 12:20:13.291326 1254 log.go:172] (0xc0008f4630) Reply frame received for 3\nI0104 12:20:13.291374 1254 log.go:172] (0xc0008f4630) (0xc0007b8000) Create stream\nI0104 12:20:13.291402 1254 log.go:172] (0xc0008f4630) (0xc0007b8000) Stream added, broadcasting: 5\nI0104 12:20:13.292437 1254 log.go:172] (0xc0008f4630) Reply frame received for 5\nI0104 12:20:13.372373 1254 log.go:172] (0xc0008f4630) Data frame received for 5\nI0104 12:20:13.372428 1254 log.go:172] (0xc0007b8000) (5) Data frame handling\nI0104 12:20:13.372449 
1254 log.go:172] (0xc0007b8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 12:20:13.403910 1254 log.go:172] (0xc0008f4630) Data frame received for 3\nI0104 12:20:13.403936 1254 log.go:172] (0xc000628000) (3) Data frame handling\nI0104 12:20:13.403947 1254 log.go:172] (0xc000628000) (3) Data frame sent\nI0104 12:20:13.480388 1254 log.go:172] (0xc0008f4630) Data frame received for 1\nI0104 12:20:13.480557 1254 log.go:172] (0xc0008f4630) (0xc000628000) Stream removed, broadcasting: 3\nI0104 12:20:13.480644 1254 log.go:172] (0xc00032ab40) (1) Data frame handling\nI0104 12:20:13.480724 1254 log.go:172] (0xc0008f4630) (0xc0007b8000) Stream removed, broadcasting: 5\nI0104 12:20:13.480867 1254 log.go:172] (0xc00032ab40) (1) Data frame sent\nI0104 12:20:13.480942 1254 log.go:172] (0xc0008f4630) (0xc00032ab40) Stream removed, broadcasting: 1\nI0104 12:20:13.480957 1254 log.go:172] (0xc0008f4630) Go away received\nI0104 12:20:13.483714 1254 log.go:172] (0xc0008f4630) (0xc00032ab40) Stream removed, broadcasting: 1\nI0104 12:20:13.483961 1254 log.go:172] (0xc0008f4630) (0xc000628000) Stream removed, broadcasting: 3\nI0104 12:20:13.484022 1254 log.go:172] (0xc0008f4630) (0xc0007b8000) Stream removed, broadcasting: 5\n" Jan 4 12:20:13.489: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 4 12:20:13.489: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 4 12:20:23.540: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 4 12:20:33.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7928 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 12:20:34.025: INFO: stderr: 
"I0104 12:20:33.811047 1274 log.go:172] (0xc0009dc420) (0xc000702640) Create stream\nI0104 12:20:33.811190 1274 log.go:172] (0xc0009dc420) (0xc000702640) Stream added, broadcasting: 1\nI0104 12:20:33.814155 1274 log.go:172] (0xc0009dc420) Reply frame received for 1\nI0104 12:20:33.814177 1274 log.go:172] (0xc0009dc420) (0xc000702780) Create stream\nI0104 12:20:33.814185 1274 log.go:172] (0xc0009dc420) (0xc000702780) Stream added, broadcasting: 3\nI0104 12:20:33.815119 1274 log.go:172] (0xc0009dc420) Reply frame received for 3\nI0104 12:20:33.815146 1274 log.go:172] (0xc0009dc420) (0xc00056a0a0) Create stream\nI0104 12:20:33.815172 1274 log.go:172] (0xc0009dc420) (0xc00056a0a0) Stream added, broadcasting: 5\nI0104 12:20:33.816097 1274 log.go:172] (0xc0009dc420) Reply frame received for 5\nI0104 12:20:33.924635 1274 log.go:172] (0xc0009dc420) Data frame received for 3\nI0104 12:20:33.924794 1274 log.go:172] (0xc000702780) (3) Data frame handling\nI0104 12:20:33.924817 1274 log.go:172] (0xc000702780) (3) Data frame sent\nI0104 12:20:33.924865 1274 log.go:172] (0xc0009dc420) Data frame received for 5\nI0104 12:20:33.924988 1274 log.go:172] (0xc00056a0a0) (5) Data frame handling\nI0104 12:20:33.924998 1274 log.go:172] (0xc00056a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 12:20:34.017799 1274 log.go:172] (0xc0009dc420) (0xc000702780) Stream removed, broadcasting: 3\nI0104 12:20:34.017953 1274 log.go:172] (0xc0009dc420) Data frame received for 1\nI0104 12:20:34.017968 1274 log.go:172] (0xc000702640) (1) Data frame handling\nI0104 12:20:34.017982 1274 log.go:172] (0xc000702640) (1) Data frame sent\nI0104 12:20:34.017991 1274 log.go:172] (0xc0009dc420) (0xc000702640) Stream removed, broadcasting: 1\nI0104 12:20:34.018388 1274 log.go:172] (0xc0009dc420) (0xc00056a0a0) Stream removed, broadcasting: 5\nI0104 12:20:34.018416 1274 log.go:172] (0xc0009dc420) (0xc000702640) Stream removed, broadcasting: 1\nI0104 12:20:34.018437 1274 log.go:172] 
(0xc0009dc420) (0xc000702780) Stream removed, broadcasting: 3\nI0104 12:20:34.018454 1274 log.go:172] (0xc0009dc420) (0xc00056a0a0) Stream removed, broadcasting: 5\nI0104 12:20:34.018665 1274 log.go:172] (0xc0009dc420) Go away received\n"
Jan 4 12:20:34.025: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 4 12:20:34.025: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 4 12:20:44.108: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:20:44.108: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:20:44.108: INFO: Waiting for Pod statefulset-7928/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:20:44.108: INFO: Waiting for Pod statefulset-7928/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:20:54.118: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:20:54.118: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:20:54.118: INFO: Waiting for Pod statefulset-7928/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:21:04.432: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:21:04.432: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:21:14.122: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:21:14.122: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 4 12:21:24.180: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 4 12:21:34.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7928 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 4 12:21:34.664: INFO: stderr: "I0104 12:21:34.272229 1294 log.go:172] (0xc0007ba420) (0xc000782640) Create stream\nI0104 12:21:34.272316 1294 log.go:172] (0xc0007ba420) (0xc000782640) Stream added, broadcasting: 1\nI0104 12:21:34.275594 1294 log.go:172] (0xc0007ba420) Reply frame received for 1\nI0104 12:21:34.275614 1294 log.go:172] (0xc0007ba420) (0xc0007826e0) Create stream\nI0104 12:21:34.275619 1294 log.go:172] (0xc0007ba420) (0xc0007826e0) Stream added, broadcasting: 3\nI0104 12:21:34.276754 1294 log.go:172] (0xc0007ba420) Reply frame received for 3\nI0104 12:21:34.276774 1294 log.go:172] (0xc0007ba420) (0xc0001fa280) Create stream\nI0104 12:21:34.276784 1294 log.go:172] (0xc0007ba420) (0xc0001fa280) Stream added, broadcasting: 5\nI0104 12:21:34.277997 1294 log.go:172] (0xc0007ba420) Reply frame received for 5\nI0104 12:21:34.407514 1294 log.go:172] (0xc0007ba420) Data frame received for 5\nI0104 12:21:34.407535 1294 log.go:172] (0xc0001fa280) (5) Data frame handling\nI0104 12:21:34.407541 1294 log.go:172] (0xc0001fa280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 12:21:34.483461 1294 log.go:172] (0xc0007ba420) Data frame received for 3\nI0104 12:21:34.483510 1294 log.go:172] (0xc0007826e0) (3) Data frame handling\nI0104 12:21:34.483528 1294 log.go:172] (0xc0007826e0) (3) Data frame sent\nI0104 12:21:34.654499 1294 log.go:172] (0xc0007ba420) (0xc0007826e0) Stream removed, broadcasting: 3\nI0104 12:21:34.654710 1294 log.go:172] (0xc0007ba420) Data frame received for 1\nI0104 12:21:34.654729 1294 log.go:172] (0xc000782640) (1) Data frame handling\nI0104 12:21:34.654772 1294 log.go:172] (0xc000782640) (1) Data frame sent\nI0104 12:21:34.654789 1294 log.go:172] (0xc0007ba420) (0xc000782640) Stream removed, broadcasting: 1\nI0104 12:21:34.655646 1294 log.go:172] (0xc0007ba420) (0xc0001fa280) Stream removed, broadcasting: 5\nI0104 12:21:34.655699 1294 log.go:172] (0xc0007ba420) (0xc000782640) Stream removed, broadcasting: 1\nI0104 12:21:34.655714 1294 log.go:172] (0xc0007ba420) (0xc0007826e0) Stream removed, broadcasting: 3\nI0104 12:21:34.655731 1294 log.go:172] (0xc0007ba420) (0xc0001fa280) Stream removed, broadcasting: 5\n"
Jan 4 12:21:34.665: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 4 12:21:34.665: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 4 12:21:34.785: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 4 12:21:44.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7928 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 4 12:21:45.278: INFO: stderr: "I0104 12:21:45.064274 1313 log.go:172] (0xc000151080) (0xc0001d0b40) Create stream\nI0104 12:21:45.064430 1313 log.go:172] (0xc000151080) (0xc0001d0b40) Stream added, broadcasting: 1\nI0104 12:21:45.081025 1313 log.go:172] (0xc000151080) Reply frame received for 1\nI0104 12:21:45.081070 1313 log.go:172] (0xc000151080) (0xc0001d0280) Create stream\nI0104 12:21:45.081077 1313 log.go:172] (0xc000151080) (0xc0001d0280) Stream added, broadcasting: 3\nI0104 12:21:45.082176 1313 log.go:172] (0xc000151080) Reply frame received for 3\nI0104 12:21:45.082203 1313 log.go:172] (0xc000151080) (0xc0000b0000) Create stream\nI0104 12:21:45.082214 1313 log.go:172] (0xc000151080) (0xc0000b0000) Stream added, broadcasting: 5\nI0104 12:21:45.083248 1313 log.go:172] (0xc000151080) Reply frame received for 5\nI0104 12:21:45.177606 1313 log.go:172] (0xc000151080) Data frame received for 5\nI0104 12:21:45.177753 1313 log.go:172] (0xc0000b0000) (5) Data frame handling\nI0104 12:21:45.177796 1313 log.go:172] (0xc0000b0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 12:21:45.177842 1313 log.go:172] (0xc000151080) Data frame received for 3\nI0104 12:21:45.177871 1313 log.go:172] (0xc0001d0280) (3) Data frame handling\nI0104 12:21:45.177944 1313 log.go:172] (0xc0001d0280) (3) Data frame sent\nI0104 12:21:45.269440 1313 log.go:172] (0xc000151080) Data frame received for 1\nI0104 12:21:45.269588 1313 log.go:172] (0xc0001d0b40) (1) Data frame handling\nI0104 12:21:45.269626 1313 log.go:172] (0xc0001d0b40) (1) Data frame sent\nI0104 12:21:45.269650 1313 log.go:172] (0xc000151080) (0xc0001d0b40) Stream removed, broadcasting: 1\nI0104 12:21:45.271180 1313 log.go:172] (0xc000151080) (0xc0001d0280) Stream removed, broadcasting: 3\nI0104 12:21:45.271465 1313 log.go:172] (0xc000151080) (0xc0000b0000) Stream removed, broadcasting: 5\nI0104 12:21:45.271504 1313 log.go:172] (0xc000151080) (0xc0001d0b40) Stream removed, broadcasting: 1\nI0104 12:21:45.271516 1313 log.go:172] (0xc000151080) (0xc0001d0280) Stream removed, broadcasting: 3\nI0104 12:21:45.271527 1313 log.go:172] (0xc000151080) (0xc0000b0000) Stream removed, broadcasting: 5\n"
Jan 4 12:21:45.278: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 4 12:21:45.278: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 4 12:21:55.314: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:21:55.314: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 4 12:21:55.314: INFO: Waiting for Pod statefulset-7928/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 4 12:22:05.330: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:22:05.331: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 4 12:22:05.331: INFO: Waiting for Pod statefulset-7928/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 4 12:22:15.325: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:22:15.325: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 4 12:22:25.321: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
Jan 4 12:22:25.321: INFO: Waiting for Pod statefulset-7928/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 4 12:22:35.330: INFO: Waiting for StatefulSet statefulset-7928/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 4 12:22:45.340: INFO: Deleting all statefulset in ns statefulset-7928
Jan 4 12:22:45.348: INFO: Scaling statefulset ss2 to 0
Jan 4 12:23:35.394: INFO: Waiting for statefulset status.replicas updated to 0
Jan 4 12:23:35.400: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:23:35.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7928" for this suite.
Jan 4 12:23:43.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:23:43.647: INFO: namespace statefulset-7928 deletion completed in 8.17044574s

• [SLOW TEST:240.750 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:23:43.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 4 12:23:43.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8898'
Jan 4 12:23:43.853: INFO: stderr: ""
Jan 4 12:23:43.853: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 4 12:23:43.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8898'
Jan 4 12:23:50.773: INFO: stderr: ""
Jan 4 12:23:50.774: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:23:50.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8898" for this suite.
Jan 4 12:23:56.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:23:56.985: INFO: namespace kubectl-8898 deletion completed in 6.203319445s

• [SLOW TEST:13.337 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:23:56.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 4 12:24:07.769: INFO: Successfully updated pod "labelsupdate9b2375b7-3434-4c49-ac38-2586ba230385"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:24:09.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4274" for this suite.
Jan 4 12:24:31.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:24:32.095: INFO: namespace downward-api-4274 deletion completed in 22.177037987s

• [SLOW TEST:35.110 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:24:32.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-10a9933f-87c3-4972-abe4-72799fa9d1e9
STEP: Creating a pod to test consume configMaps
Jan 4 12:24:32.228: INFO: Waiting up to 5m0s for pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef" in namespace "configmap-1073" to be "success or failure"
Jan 4 12:24:32.237: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 9.158294ms
Jan 4 12:24:34.244: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016432835s
Jan 4 12:24:36.256: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028589663s
Jan 4 12:24:38.328: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099801046s
Jan 4 12:24:40.336: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef": Phase="Running", Reason="", readiness=true. Elapsed: 8.107775736s
Jan 4 12:24:42.344: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116150928s
STEP: Saw pod success
Jan 4 12:24:42.344: INFO: Pod "pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef" satisfied condition "success or failure"
Jan 4 12:24:42.346: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef container configmap-volume-test:
STEP: delete the pod
Jan 4 12:24:42.480: INFO: Waiting for pod pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef to disappear
Jan 4 12:24:42.489: INFO: Pod pod-configmaps-d61901fc-9911-46f3-8a55-aa6e1913ecef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:24:42.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1073" for this suite.
Jan 4 12:24:48.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:24:48.639: INFO: namespace configmap-1073 deletion completed in 6.145655385s

• [SLOW TEST:16.544 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:24:48.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5744.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5744.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5744.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5744.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 4 12:25:02.871: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56: the server could not find the requested resource (get pods dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56)
Jan 4 12:25:02.876: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56: the server could not find the requested resource (get pods dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56)
Jan 4 12:25:02.880: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5744.svc.cluster.local from pod dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56: the server could not find the requested resource (get pods dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56)
Jan 4 12:25:02.885: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56: the server could not find the requested resource (get pods dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56)
Jan 4 12:25:02.892: INFO: Unable to read jessie_udp@PodARecord from pod dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56: the server could not find the requested resource (get pods dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56)
Jan 4 12:25:02.898: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56: the server could not find the requested resource (get pods dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56)
Jan 4 12:25:02.899: INFO: Lookups using dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-5744.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Jan 4 12:25:08.033: INFO: DNS probes using dns-5744/dns-test-d681acdc-8539-402d-986a-8fbbbcfcce56 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:25:08.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5744" for this suite.
Jan 4 12:25:14.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:25:14.285: INFO: namespace dns-5744 deletion completed in 6.194890314s

• [SLOW TEST:25.645 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:25:14.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:25:14.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3501" for this suite.
Jan 4 12:25:36.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:25:36.800: INFO: namespace kubelet-test-3501 deletion completed in 22.335991945s

• [SLOW TEST:22.514 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:25:36.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 4 12:25:36.855: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 4 12:25:36.948: INFO: Waiting for terminating namespaces to be deleted...
Jan 4 12:25:36.950: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 4 12:25:36.960: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.960: INFO: Container kube-proxy ready: true, restart count 0
Jan 4 12:25:36.960: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 4 12:25:36.960: INFO: Container weave ready: true, restart count 0
Jan 4 12:25:36.960: INFO: Container weave-npc ready: true, restart count 0
Jan 4 12:25:36.960: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 4 12:25:36.967: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container kube-scheduler ready: true, restart count 12
Jan 4 12:25:36.967: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container coredns ready: true, restart count 0
Jan 4 12:25:36.967: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container etcd ready: true, restart count 0
Jan 4 12:25:36.967: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container weave ready: true, restart count 0
Jan 4 12:25:36.967: INFO: Container weave-npc ready: true, restart count 0
Jan 4 12:25:36.967: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container coredns ready: true, restart count 0
Jan 4 12:25:36.967: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container kube-controller-manager ready: true, restart count 17
Jan 4 12:25:36.967: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container kube-proxy ready: true, restart count 0
Jan 4 12:25:36.967: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 4 12:25:36.967: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e6aea391809e99], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:25:37.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-759" for this suite.
Jan 4 12:25:44.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:25:44.139: INFO: namespace sched-pred-759 deletion completed in 6.145611996s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.339 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:25:44.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 4 12:25:44.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 4 12:25:46.453: INFO: stderr: ""
Jan 4 12:25:46.453: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:25:46.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9937" for this suite.
Jan 4 12:25:52.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:25:52.636: INFO: namespace kubectl-9937 deletion completed in 6.173763635s

• [SLOW TEST:8.497 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:25:52.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-vxbk
STEP: Creating a pod to test atomic-volume-subpath
Jan 4 12:25:52.890: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vxbk" in namespace "subpath-9196" to be "success or failure"
Jan 4 12:25:52.896: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.783148ms
Jan 4 12:25:54.958: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068167798s
Jan 4 12:25:56.963: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073554229s
Jan 4 12:25:58.992: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101795916s
Jan 4 12:26:01.006: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 8.116064461s
Jan 4 12:26:03.022: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 10.132493885s
Jan 4 12:26:05.600: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 12.710736582s
Jan 4 12:26:07.614: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 14.724674126s
Jan 4 12:26:09.627: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 16.737645252s
Jan 4 12:26:11.635: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 18.745286597s
Jan 4 12:26:13.652: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 20.762152077s
Jan 4 12:26:15.661: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 22.771642239s
Jan 4 12:26:17.674: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 24.784505831s
Jan 4 12:26:19.681: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 26.791165589s
Jan 4 12:26:21.691: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Running", Reason="", readiness=true. Elapsed: 28.801462593s
Jan 4 12:26:23.699: INFO: Pod "pod-subpath-test-projected-vxbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.80897336s
STEP: Saw pod success
Jan 4 12:26:23.699: INFO: Pod "pod-subpath-test-projected-vxbk" satisfied condition "success or failure"
Jan 4 12:26:23.703: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-vxbk container test-container-subpath-projected-vxbk:
STEP: delete the pod
Jan 4 12:26:24.322: INFO: Waiting for pod pod-subpath-test-projected-vxbk to disappear
Jan 4 12:26:24.332: INFO: Pod pod-subpath-test-projected-vxbk no longer exists
STEP: Deleting pod pod-subpath-test-projected-vxbk
Jan 4 12:26:24.333: INFO: Deleting pod "pod-subpath-test-projected-vxbk" in namespace "subpath-9196"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:26:24.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9196" for this suite.
Jan 4 12:26:30.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:26:30.502: INFO: namespace subpath-9196 deletion completed in 6.164496397s
• [SLOW TEST:37.865 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:26:30.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 4 12:26:56.659: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:56.659: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:56.757934 8 log.go:172] (0xc0008b04d0) (0xc001d09d60) Create stream
I0104 12:26:56.757981 8 log.go:172] (0xc0008b04d0) (0xc001d09d60) Stream added, broadcasting: 1
I0104 12:26:56.766702 8 log.go:172] (0xc0008b04d0) Reply frame received for 1
I0104 12:26:56.766804 8 log.go:172] (0xc0008b04d0) (0xc002f46a00) Create stream
I0104 12:26:56.766834 8 log.go:172] (0xc0008b04d0) (0xc002f46a00) Stream added, broadcasting: 3
I0104 12:26:56.770158 8 log.go:172] (0xc0008b04d0) Reply frame received for 3
I0104 12:26:56.770203 8 log.go:172] (0xc0008b04d0) (0xc001726460) Create stream
I0104 12:26:56.770223 8 log.go:172] (0xc0008b04d0) (0xc001726460) Stream added, broadcasting: 5
I0104 12:26:56.772983 8 log.go:172] (0xc0008b04d0) Reply frame received for 5
I0104 12:26:56.887672 8 log.go:172] (0xc0008b04d0) Data frame received for 3
I0104 12:26:56.888185 8 log.go:172] (0xc002f46a00) (3) Data frame handling
I0104 12:26:56.888229 8 log.go:172] (0xc002f46a00) (3) Data frame sent
I0104 12:26:57.082729 8 log.go:172] (0xc0008b04d0) (0xc002f46a00) Stream removed, broadcasting: 3
I0104 12:26:57.082812 8 log.go:172] (0xc0008b04d0) Data frame received for 1
I0104 12:26:57.082849 8 log.go:172] (0xc001d09d60) (1) Data frame handling
I0104 12:26:57.082866 8 log.go:172] (0xc001d09d60) (1) Data frame sent
I0104 12:26:57.082888 8 log.go:172] (0xc0008b04d0) (0xc001726460) Stream removed, broadcasting: 5
I0104 12:26:57.082929 8 log.go:172] (0xc0008b04d0) (0xc001d09d60) Stream removed, broadcasting: 1
I0104 12:26:57.082964 8 log.go:172] (0xc0008b04d0) Go away received
I0104 12:26:57.083118 8 log.go:172] (0xc0008b04d0) (0xc001d09d60) Stream removed, broadcasting: 1
I0104 12:26:57.083141 8 log.go:172] (0xc0008b04d0) (0xc002f46a00) Stream removed, broadcasting: 3
I0104 12:26:57.083159 8 log.go:172] (0xc0008b04d0) (0xc001726460) Stream removed, broadcasting: 5
Jan 4 12:26:57.083: INFO: Exec stderr: ""
Jan 4 12:26:57.083: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:57.083: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:57.146136 8 log.go:172] (0xc000f246e0) (0xc00218a780) Create stream
I0104 12:26:57.146215 8 log.go:172] (0xc000f246e0) (0xc00218a780) Stream added, broadcasting: 1
I0104 12:26:57.154468 8 log.go:172] (0xc000f246e0) Reply frame received for 1
I0104 12:26:57.154582 8 log.go:172] (0xc000f246e0) (0xc001d09e00) Create stream
I0104 12:26:57.154592 8 log.go:172] (0xc000f246e0) (0xc001d09e00) Stream added, broadcasting: 3
I0104 12:26:57.158188 8 log.go:172] (0xc000f246e0) Reply frame received for 3
I0104 12:26:57.158217 8 log.go:172] (0xc000f246e0) (0xc00218a820) Create stream
I0104 12:26:57.158223 8 log.go:172] (0xc000f246e0) (0xc00218a820) Stream added, broadcasting: 5
I0104 12:26:57.161445 8 log.go:172] (0xc000f246e0) Reply frame received for 5
I0104 12:26:57.272749 8 log.go:172] (0xc000f246e0) Data frame received for 3
I0104 12:26:57.272807 8 log.go:172] (0xc001d09e00) (3) Data frame handling
I0104 12:26:57.272837 8 log.go:172] (0xc001d09e00) (3) Data frame sent
I0104 12:26:57.412833 8 log.go:172] (0xc000f246e0) (0xc001d09e00) Stream removed, broadcasting: 3
I0104 12:26:57.413083 8 log.go:172] (0xc000f246e0) Data frame received for 1
I0104 12:26:57.413184 8 log.go:172] (0xc000f246e0) (0xc00218a820) Stream removed, broadcasting: 5
I0104 12:26:57.413262 8 log.go:172] (0xc00218a780) (1) Data frame handling
I0104 12:26:57.413353 8 log.go:172] (0xc00218a780) (1) Data frame sent
I0104 12:26:57.413375 8 log.go:172] (0xc000f246e0) (0xc00218a780) Stream removed, broadcasting: 1
I0104 12:26:57.413478 8 log.go:172] (0xc000f246e0) (0xc00218a780) Stream removed, broadcasting: 1
I0104 12:26:57.413489 8 log.go:172] (0xc000f246e0) (0xc001d09e00) Stream removed, broadcasting: 3
I0104 12:26:57.413497 8 log.go:172] (0xc000f246e0) (0xc00218a820) Stream removed, broadcasting: 5
I0104 12:26:57.414076 8 log.go:172] (0xc000f246e0) Go away received
Jan 4 12:26:57.414: INFO: Exec stderr: ""
Jan 4 12:26:57.414: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:57.414: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:57.516616 8 log.go:172] (0xc000d711e0) (0xc001726780) Create stream
I0104 12:26:57.516666 8 log.go:172] (0xc000d711e0) (0xc001726780) Stream added, broadcasting: 1
I0104 12:26:57.522233 8 log.go:172] (0xc000d711e0) Reply frame received for 1
I0104 12:26:57.522269 8 log.go:172] (0xc000d711e0) (0xc001d09f40) Create stream
I0104 12:26:57.522282 8 log.go:172] (0xc000d711e0) (0xc001d09f40) Stream added, broadcasting: 3
I0104 12:26:57.523891 8 log.go:172] (0xc000d711e0) Reply frame received for 3
I0104 12:26:57.523905 8 log.go:172] (0xc000d711e0) (0xc001726960) Create stream
I0104 12:26:57.523910 8 log.go:172] (0xc000d711e0) (0xc001726960) Stream added, broadcasting: 5
I0104 12:26:57.526307 8 log.go:172] (0xc000d711e0) Reply frame received for 5
I0104 12:26:57.616729 8 log.go:172] (0xc000d711e0) Data frame received for 3
I0104 12:26:57.616880 8 log.go:172] (0xc001d09f40) (3) Data frame handling
I0104 12:26:57.616964 8 log.go:172] (0xc001d09f40) (3) Data frame sent
I0104 12:26:57.822791 8 log.go:172] (0xc000d711e0) Data frame received for 1
I0104 12:26:57.823100 8 log.go:172] (0xc001726780) (1) Data frame handling
I0104 12:26:57.823119 8 log.go:172] (0xc001726780) (1) Data frame sent
I0104 12:26:57.824099 8 log.go:172] (0xc000d711e0) (0xc001726780) Stream removed, broadcasting: 1
I0104 12:26:57.824645 8 log.go:172] (0xc000d711e0) (0xc001d09f40) Stream removed, broadcasting: 3
I0104 12:26:57.824690 8 log.go:172] (0xc000d711e0) (0xc001726960) Stream removed, broadcasting: 5
I0104 12:26:57.824716 8 log.go:172] (0xc000d711e0) (0xc001726780) Stream removed, broadcasting: 1
I0104 12:26:57.824730 8 log.go:172] (0xc000d711e0) (0xc001d09f40) Stream removed, broadcasting: 3
I0104 12:26:57.824743 8 log.go:172] (0xc000d711e0) (0xc001726960) Stream removed, broadcasting: 5
Jan 4 12:26:57.824: INFO: Exec stderr: ""
Jan 4 12:26:57.824: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:57.824: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:57.825292 8 log.go:172] (0xc000d711e0) Go away received
I0104 12:26:57.910059 8 log.go:172] (0xc0008b1600) (0xc0019303c0) Create stream
I0104 12:26:57.910179 8 log.go:172] (0xc0008b1600) (0xc0019303c0) Stream added, broadcasting: 1
I0104 12:26:57.916718 8 log.go:172] (0xc0008b1600) Reply frame received for 1
I0104 12:26:57.916788 8 log.go:172] (0xc0008b1600) (0xc002f46b40) Create stream
I0104 12:26:57.916796 8 log.go:172] (0xc0008b1600) (0xc002f46b40) Stream added, broadcasting: 3
I0104 12:26:57.919105 8 log.go:172] (0xc0008b1600) Reply frame received for 3
I0104 12:26:57.919127 8 log.go:172] (0xc0008b1600) (0xc00218a8c0) Create stream
I0104 12:26:57.919135 8 log.go:172] (0xc0008b1600) (0xc00218a8c0) Stream added, broadcasting: 5
I0104 12:26:57.920578 8 log.go:172] (0xc0008b1600) Reply frame received for 5
I0104 12:26:57.995901 8 log.go:172] (0xc0008b1600) Data frame received for 3
I0104 12:26:57.995941 8 log.go:172] (0xc002f46b40) (3) Data frame handling
I0104 12:26:57.995960 8 log.go:172] (0xc002f46b40) (3) Data frame sent
I0104 12:26:58.139895 8 log.go:172] (0xc0008b1600) (0xc002f46b40) Stream removed, broadcasting: 3
I0104 12:26:58.140081 8 log.go:172] (0xc0008b1600) Data frame received for 1
I0104 12:26:58.140105 8 log.go:172] (0xc0019303c0) (1) Data frame handling
I0104 12:26:58.140134 8 log.go:172] (0xc0008b1600) (0xc00218a8c0) Stream removed, broadcasting: 5
I0104 12:26:58.140184 8 log.go:172] (0xc0019303c0) (1) Data frame sent
I0104 12:26:58.140220 8 log.go:172] (0xc0008b1600) (0xc0019303c0) Stream removed, broadcasting: 1
I0104 12:26:58.140247 8 log.go:172] (0xc0008b1600) Go away received
I0104 12:26:58.140602 8 log.go:172] (0xc0008b1600) (0xc0019303c0) Stream removed, broadcasting: 1
I0104 12:26:58.140633 8 log.go:172] (0xc0008b1600) (0xc002f46b40) Stream removed, broadcasting: 3
I0104 12:26:58.140653 8 log.go:172] (0xc0008b1600) (0xc00218a8c0) Stream removed, broadcasting: 5
Jan 4 12:26:58.140: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 4 12:26:58.140: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:58.140: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:58.228298 8 log.go:172] (0xc000265ce0) (0xc002f46fa0) Create stream
I0104 12:26:58.228415 8 log.go:172] (0xc000265ce0) (0xc002f46fa0) Stream added, broadcasting: 1
I0104 12:26:58.242338 8 log.go:172] (0xc000265ce0) Reply frame received for 1
I0104 12:26:58.242400 8 log.go:172] (0xc000265ce0) (0xc001726a00) Create stream
I0104 12:26:58.242406 8 log.go:172] (0xc000265ce0) (0xc001726a00) Stream added, broadcasting: 3
I0104 12:26:58.244311 8 log.go:172] (0xc000265ce0) Reply frame received for 3
I0104 12:26:58.244342 8 log.go:172] (0xc000265ce0) (0xc001930460) Create stream
I0104 12:26:58.244350 8 log.go:172] (0xc000265ce0) (0xc001930460) Stream added, broadcasting: 5
I0104 12:26:58.245900 8 log.go:172] (0xc000265ce0) Reply frame received for 5
I0104 12:26:58.398021 8 log.go:172] (0xc000265ce0) Data frame received for 3
I0104 12:26:58.398306 8 log.go:172] (0xc001726a00) (3) Data frame handling
I0104 12:26:58.398349 8 log.go:172] (0xc001726a00) (3) Data frame sent
I0104 12:26:58.724148 8 log.go:172] (0xc000265ce0) Data frame received for 1
I0104 12:26:58.724476 8 log.go:172] (0xc000265ce0) (0xc001726a00) Stream removed, broadcasting: 3
I0104 12:26:58.724540 8 log.go:172] (0xc002f46fa0) (1) Data frame handling
I0104 12:26:58.724565 8 log.go:172] (0xc002f46fa0) (1) Data frame sent
I0104 12:26:58.724595 8 log.go:172] (0xc000265ce0) (0xc001930460) Stream removed, broadcasting: 5
I0104 12:26:58.724648 8 log.go:172] (0xc000265ce0) (0xc002f46fa0) Stream removed, broadcasting: 1
I0104 12:26:58.724687 8 log.go:172] (0xc000265ce0) Go away received
I0104 12:26:58.725075 8 log.go:172] (0xc000265ce0) (0xc002f46fa0) Stream removed, broadcasting: 1
I0104 12:26:58.725091 8 log.go:172] (0xc000265ce0) (0xc001726a00) Stream removed, broadcasting: 3
I0104 12:26:58.725098 8 log.go:172] (0xc000265ce0) (0xc001930460) Stream removed, broadcasting: 5
Jan 4 12:26:58.725: INFO: Exec stderr: ""
Jan 4 12:26:58.725: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:58.725: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:58.870205 8 log.go:172] (0xc001acc420) (0xc002f47180) Create stream
I0104 12:26:58.870315 8 log.go:172] (0xc001acc420) (0xc002f47180) Stream added, broadcasting: 1
I0104 12:26:58.881427 8 log.go:172] (0xc001acc420) Reply frame received for 1
I0104 12:26:58.881472 8 log.go:172] (0xc001acc420) (0xc00218a960) Create stream
I0104 12:26:58.881478 8 log.go:172] (0xc001acc420) (0xc00218a960) Stream added, broadcasting: 3
I0104 12:26:58.884080 8 log.go:172] (0xc001acc420) Reply frame received for 3
I0104 12:26:58.884173 8 log.go:172] (0xc001acc420) (0xc001930960) Create stream
I0104 12:26:58.884201 8 log.go:172] (0xc001acc420) (0xc001930960) Stream added, broadcasting: 5
I0104 12:26:58.890700 8 log.go:172] (0xc001acc420) Reply frame received for 5
I0104 12:26:59.018922 8 log.go:172] (0xc001acc420) Data frame received for 3
I0104 12:26:59.018991 8 log.go:172] (0xc00218a960) (3) Data frame handling
I0104 12:26:59.019004 8 log.go:172] (0xc00218a960) (3) Data frame sent
I0104 12:26:59.132539 8 log.go:172] (0xc001acc420) Data frame received for 1
I0104 12:26:59.132653 8 log.go:172] (0xc002f47180) (1) Data frame handling
I0104 12:26:59.132694 8 log.go:172] (0xc002f47180) (1) Data frame sent
I0104 12:26:59.132706 8 log.go:172] (0xc001acc420) (0xc002f47180) Stream removed, broadcasting: 1
I0104 12:26:59.133265 8 log.go:172] (0xc001acc420) (0xc00218a960) Stream removed, broadcasting: 3
I0104 12:26:59.133361 8 log.go:172] (0xc001acc420) (0xc001930960) Stream removed, broadcasting: 5
I0104 12:26:59.133397 8 log.go:172] (0xc001acc420) Go away received
I0104 12:26:59.133451 8 log.go:172] (0xc001acc420) (0xc002f47180) Stream removed, broadcasting: 1
I0104 12:26:59.133492 8 log.go:172] (0xc001acc420) (0xc00218a960) Stream removed, broadcasting: 3
I0104 12:26:59.133502 8 log.go:172] (0xc001acc420) (0xc001930960) Stream removed, broadcasting: 5
Jan 4 12:26:59.133: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 4 12:26:59.133: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:59.133: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:59.172783 8 log.go:172] (0xc002042370) (0xc001b6d2c0) Create stream
I0104 12:26:59.172841 8 log.go:172] (0xc002042370) (0xc001b6d2c0) Stream added, broadcasting: 1
I0104 12:26:59.176700 8 log.go:172] (0xc002042370) Reply frame received for 1
I0104 12:26:59.176727 8 log.go:172] (0xc002042370) (0xc001930aa0) Create stream
I0104 12:26:59.176738 8 log.go:172] (0xc002042370) (0xc001930aa0) Stream added, broadcasting: 3
I0104 12:26:59.177881 8 log.go:172] (0xc002042370) Reply frame received for 3
I0104 12:26:59.177898 8 log.go:172] (0xc002042370) (0xc001930b40) Create stream
I0104 12:26:59.177904 8 log.go:172] (0xc002042370) (0xc001930b40) Stream added, broadcasting: 5
I0104 12:26:59.178769 8 log.go:172] (0xc002042370) Reply frame received for 5
I0104 12:26:59.257850 8 log.go:172] (0xc002042370) Data frame received for 3
I0104 12:26:59.257884 8 log.go:172] (0xc001930aa0) (3) Data frame handling
I0104 12:26:59.257891 8 log.go:172] (0xc001930aa0) (3) Data frame sent
I0104 12:26:59.367400 8 log.go:172] (0xc002042370) Data frame received for 1
I0104 12:26:59.367520 8 log.go:172] (0xc001b6d2c0) (1) Data frame handling
I0104 12:26:59.367561 8 log.go:172] (0xc001b6d2c0) (1) Data frame sent
I0104 12:26:59.368755 8 log.go:172] (0xc002042370) (0xc001930b40) Stream removed, broadcasting: 5
I0104 12:26:59.368815 8 log.go:172] (0xc002042370) (0xc001b6d2c0) Stream removed, broadcasting: 1
I0104 12:26:59.368876 8 log.go:172] (0xc002042370) (0xc001930aa0) Stream removed, broadcasting: 3
I0104 12:26:59.368906 8 log.go:172] (0xc002042370) (0xc001b6d2c0) Stream removed, broadcasting: 1
I0104 12:26:59.368929 8 log.go:172] (0xc002042370) (0xc001930aa0) Stream removed, broadcasting: 3
I0104 12:26:59.368949 8 log.go:172] (0xc002042370) (0xc001930b40) Stream removed, broadcasting: 5
Jan 4 12:26:59.369: INFO: Exec stderr: ""
Jan 4 12:26:59.369: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:59.369: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:59.369921 8 log.go:172] (0xc002042370) Go away received
I0104 12:26:59.418860 8 log.go:172] (0xc0020428f0) (0xc001b6d400) Create stream
I0104 12:26:59.418908 8 log.go:172] (0xc0020428f0) (0xc001b6d400) Stream added, broadcasting: 1
I0104 12:26:59.425166 8 log.go:172] (0xc0020428f0) Reply frame received for 1
I0104 12:26:59.425200 8 log.go:172] (0xc0020428f0) (0xc002f47220) Create stream
I0104 12:26:59.425207 8 log.go:172] (0xc0020428f0) (0xc002f47220) Stream added, broadcasting: 3
I0104 12:26:59.427107 8 log.go:172] (0xc0020428f0) Reply frame received for 3
I0104 12:26:59.427131 8 log.go:172] (0xc0020428f0) (0xc001930e60) Create stream
I0104 12:26:59.427141 8 log.go:172] (0xc0020428f0) (0xc001930e60) Stream added, broadcasting: 5
I0104 12:26:59.428450 8 log.go:172] (0xc0020428f0) Reply frame received for 5
I0104 12:26:59.531187 8 log.go:172] (0xc0020428f0) Data frame received for 3
I0104 12:26:59.531255 8 log.go:172] (0xc002f47220) (3) Data frame handling
I0104 12:26:59.531282 8 log.go:172] (0xc002f47220) (3) Data frame sent
I0104 12:26:59.636676 8 log.go:172] (0xc0020428f0) (0xc002f47220) Stream removed, broadcasting: 3
I0104 12:26:59.636814 8 log.go:172] (0xc0020428f0) Data frame received for 1
I0104 12:26:59.636825 8 log.go:172] (0xc001b6d400) (1) Data frame handling
I0104 12:26:59.636834 8 log.go:172] (0xc001b6d400) (1) Data frame sent
I0104 12:26:59.636838 8 log.go:172] (0xc0020428f0) (0xc001b6d400) Stream removed, broadcasting: 1
I0104 12:26:59.636960 8 log.go:172] (0xc0020428f0) (0xc001930e60) Stream removed, broadcasting: 5
I0104 12:26:59.637009 8 log.go:172] (0xc0020428f0) (0xc001b6d400) Stream removed, broadcasting: 1
I0104 12:26:59.637098 8 log.go:172] (0xc0020428f0) (0xc002f47220) Stream removed, broadcasting: 3
I0104 12:26:59.637108 8 log.go:172] (0xc0020428f0) (0xc001930e60) Stream removed, broadcasting: 5
Jan 4 12:26:59.637: INFO: Exec stderr: ""
Jan 4 12:26:59.637: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:59.637: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:59.637191 8 log.go:172] (0xc0020428f0) Go away received
I0104 12:26:59.693255 8 log.go:172] (0xc001ce5ef0) (0xc001931540) Create stream
I0104 12:26:59.693341 8 log.go:172] (0xc001ce5ef0) (0xc001931540) Stream added, broadcasting: 1
I0104 12:26:59.697728 8 log.go:172] (0xc001ce5ef0) Reply frame received for 1
I0104 12:26:59.697754 8 log.go:172] (0xc001ce5ef0) (0xc002f472c0) Create stream
I0104 12:26:59.697762 8 log.go:172] (0xc001ce5ef0) (0xc002f472c0) Stream added, broadcasting: 3
I0104 12:26:59.699054 8 log.go:172] (0xc001ce5ef0) Reply frame received for 3
I0104 12:26:59.699089 8 log.go:172] (0xc001ce5ef0) (0xc001726aa0) Create stream
I0104 12:26:59.699101 8 log.go:172] (0xc001ce5ef0) (0xc001726aa0) Stream added, broadcasting: 5
I0104 12:26:59.700066 8 log.go:172] (0xc001ce5ef0) Reply frame received for 5
I0104 12:26:59.803708 8 log.go:172] (0xc001ce5ef0) Data frame received for 3
I0104 12:26:59.803741 8 log.go:172] (0xc002f472c0) (3) Data frame handling
I0104 12:26:59.803756 8 log.go:172] (0xc002f472c0) (3) Data frame sent
I0104 12:26:59.905345 8 log.go:172] (0xc001ce5ef0) Data frame received for 1
I0104 12:26:59.905402 8 log.go:172] (0xc001ce5ef0) (0xc002f472c0) Stream removed, broadcasting: 3
I0104 12:26:59.905456 8 log.go:172] (0xc001931540) (1) Data frame handling
I0104 12:26:59.905467 8 log.go:172] (0xc001931540) (1) Data frame sent
I0104 12:26:59.905483 8 log.go:172] (0xc001ce5ef0) (0xc001726aa0) Stream removed, broadcasting: 5
I0104 12:26:59.905500 8 log.go:172] (0xc001ce5ef0) (0xc001931540) Stream removed, broadcasting: 1
I0104 12:26:59.905524 8 log.go:172] (0xc001ce5ef0) Go away received
I0104 12:26:59.905625 8 log.go:172] (0xc001ce5ef0) (0xc001931540) Stream removed, broadcasting: 1
I0104 12:26:59.905666 8 log.go:172] (0xc001ce5ef0) (0xc002f472c0) Stream removed, broadcasting: 3
I0104 12:26:59.905678 8 log.go:172] (0xc001ce5ef0) (0xc001726aa0) Stream removed, broadcasting: 5
Jan 4 12:26:59.905: INFO: Exec stderr: ""
Jan 4 12:26:59.905: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5114 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 12:26:59.905: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:26:59.956430 8 log.go:172] (0xc002d5c370) (0xc00218af00) Create stream
I0104 12:26:59.956464 8 log.go:172] (0xc002d5c370) (0xc00218af00) Stream added, broadcasting: 1
I0104 12:26:59.963135 8 log.go:172] (0xc002d5c370) Reply frame received for 1
I0104 12:26:59.963157 8 log.go:172] (0xc002d5c370) (0xc001b6d540) Create stream
I0104 12:26:59.963164 8 log.go:172] (0xc002d5c370) (0xc001b6d540) Stream added, broadcasting: 3
I0104 12:26:59.963944 8 log.go:172] (0xc002d5c370) Reply frame received for 3
I0104 12:26:59.963972 8 log.go:172] (0xc002d5c370) (0xc001726b40) Create stream
I0104 12:26:59.963982 8 log.go:172] (0xc002d5c370) (0xc001726b40) Stream added, broadcasting: 5
I0104 12:26:59.965095 8 log.go:172] (0xc002d5c370) Reply frame received for 5
I0104 12:27:00.072009 8 log.go:172] (0xc002d5c370) Data frame received for 3
I0104 12:27:00.072054 8 log.go:172] (0xc001b6d540) (3) Data frame handling
I0104 12:27:00.072063 8 log.go:172] (0xc001b6d540) (3) Data frame sent
I0104 12:27:00.207997 8 log.go:172] (0xc002d5c370) (0xc001b6d540) Stream removed, broadcasting: 3
I0104 12:27:00.208104 8 log.go:172] (0xc002d5c370) Data frame received for 1
I0104 12:27:00.208121 8 log.go:172] (0xc00218af00) (1) Data frame handling
I0104 12:27:00.208139 8 log.go:172] (0xc00218af00) (1) Data frame sent
I0104 12:27:00.208152 8 log.go:172] (0xc002d5c370) (0xc00218af00) Stream removed, broadcasting: 1
I0104 12:27:00.208178 8 log.go:172] (0xc002d5c370) (0xc001726b40) Stream removed, broadcasting: 5
I0104 12:27:00.208247 8 log.go:172] (0xc002d5c370) Go away received
I0104 12:27:00.208265 8 log.go:172] (0xc002d5c370) (0xc00218af00) Stream removed, broadcasting: 1
I0104 12:27:00.208292 8 log.go:172] (0xc002d5c370) (0xc001b6d540) Stream removed, broadcasting: 3
I0104 12:27:00.208305 8 log.go:172] (0xc002d5c370) (0xc001726b40) Stream removed, broadcasting: 5
Jan 4 12:27:00.208: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:27:00.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5114" for this suite.
Jan 4 12:27:46.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:27:46.446: INFO: namespace e2e-kubelet-etc-hosts-5114 deletion completed in 46.228914722s
• [SLOW TEST:75.944 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:27:46.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6f67a2f3-c0e6-44b6-9db1-4d56e6c953c3
STEP: Creating a pod to test consume configMaps
Jan 4 12:27:46.728: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42" in namespace "projected-9886" to be "success or failure"
Jan 4 12:27:46.737: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Pending", Reason="", readiness=false. Elapsed: 9.138982ms
Jan 4 12:27:48.768: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039968032s
Jan 4 12:27:50.784: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056308402s
Jan 4 12:27:52.806: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078080114s
Jan 4 12:27:54.812: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084130689s
Jan 4 12:27:56.819: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090465754s
Jan 4 12:27:58.829: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.100983464s
STEP: Saw pod success
Jan 4 12:27:58.829: INFO: Pod "pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42" satisfied condition "success or failure"
Jan 4 12:27:58.834: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42 container projected-configmap-volume-test:
STEP: delete the pod
Jan 4 12:27:58.961: INFO: Waiting for pod pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42 to disappear
Jan 4 12:27:58.965: INFO: Pod pod-projected-configmaps-f872669d-c7b8-4e60-a51f-2dbb8077eb42 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:27:58.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9886" for this suite.
Jan 4 12:28:04.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:28:05.111: INFO: namespace projected-9886 deletion completed in 6.140967388s
• [SLOW TEST:18.664 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:28:05.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 4 12:28:05.213: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259698,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 4 12:28:05.213: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259698,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 4 12:28:15.248: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259714,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 4 12:28:15.248: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259714,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 4 12:28:25.283: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259728,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 4 12:28:25.283: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259728,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 4 12:28:35.300: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259742,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 4 12:28:35.301: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-a,UID:79ed3eba-7443-4052-8f55-efd1e53ee429,ResourceVersion:19259742,Generation:0,CreationTimestamp:2020-01-04 12:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 4 12:28:45.317: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-b,UID:baf97f21-7bb5-4bfd-9e1a-d8cc31f1b2aa,ResourceVersion:19259756,Generation:0,CreationTimestamp:2020-01-04 12:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 4 12:28:45.317: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-b,UID:baf97f21-7bb5-4bfd-9e1a-d8cc31f1b2aa,ResourceVersion:19259756,Generation:0,CreationTimestamp:2020-01-04 12:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 4 12:28:55.331: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-b,UID:baf97f21-7bb5-4bfd-9e1a-d8cc31f1b2aa,ResourceVersion:19259770,Generation:0,CreationTimestamp:2020-01-04 12:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 12:28:55.331: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2841,SelfLink:/api/v1/namespaces/watch-2841/configmaps/e2e-watch-test-configmap-b,UID:baf97f21-7bb5-4bfd-9e1a-d8cc31f1b2aa,ResourceVersion:19259770,Generation:0,CreationTimestamp:2020-01-04 12:28:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:29:05.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2841" for this suite. 
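The ADDED/MODIFIED/DELETED notifications above come from watchers filtering on the `watch-this-configmap` label (one for label A, one for B, one for A-or-B), which is why each event is observed twice. As a rough sketch of the objects involved (values taken from the log; this is not the test's own fixture file), the A-labelled ConfigMap looks like:

```yaml
# ConfigMap matching watcher A's label selector; the test creates it,
# bumps the "mutation" data key twice, then deletes it, and checks that
# watchers A and A-or-B (but not B) see every event.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
```

Outside the e2e framework, `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch` streams the same event sequence.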
Jan 4 12:29:11.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:29:11.504: INFO: namespace watch-2841 deletion completed in 6.162792436s • [SLOW TEST:66.392 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:29:11.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-fc06d441-0a53-4969-affb-89aa382f7378 in namespace container-probe-5067 Jan 4 12:29:23.689: INFO: Started pod liveness-fc06d441-0a53-4969-affb-89aa382f7378 in namespace container-probe-5067 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 12:29:23.695: INFO: Initial restart count of pod liveness-fc06d441-0a53-4969-affb-89aa382f7378 is 0 Jan 4 12:29:45.806: INFO: Restart count of pod 
container-probe-5067/liveness-fc06d441-0a53-4969-affb-89aa382f7378 is now 1 (22.11124014s elapsed) Jan 4 12:30:05.966: INFO: Restart count of pod container-probe-5067/liveness-fc06d441-0a53-4969-affb-89aa382f7378 is now 2 (42.271404492s elapsed) Jan 4 12:30:26.214: INFO: Restart count of pod container-probe-5067/liveness-fc06d441-0a53-4969-affb-89aa382f7378 is now 3 (1m2.518991559s elapsed) Jan 4 12:30:49.026: INFO: Restart count of pod container-probe-5067/liveness-fc06d441-0a53-4969-affb-89aa382f7378 is now 4 (1m25.331337244s elapsed) Jan 4 12:31:45.354: INFO: Restart count of pod container-probe-5067/liveness-fc06d441-0a53-4969-affb-89aa382f7378 is now 5 (2m21.658909624s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:31:45.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5067" for this suite. Jan 4 12:31:51.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:31:51.610: INFO: namespace container-probe-5067 deletion completed in 6.133459861s • [SLOW TEST:160.106 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jan 4 12:31:51.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-538dfd4e-2252-4b84-b3f9-51aace4ab4ca STEP: Creating a pod to test consume configMaps Jan 4 12:31:51.781: INFO: Waiting up to 5m0s for pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2" in namespace "configmap-4674" to be "success or failure" Jan 4 12:31:51.791: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025096ms Jan 4 12:31:53.798: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016973207s Jan 4 12:31:55.836: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05505785s Jan 4 12:31:57.844: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063200068s Jan 4 12:31:59.853: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071596023s Jan 4 12:32:01.870: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08906768s Jan 4 12:32:03.889: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.108462612s STEP: Saw pod success Jan 4 12:32:03.890: INFO: Pod "pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2" satisfied condition "success or failure" Jan 4 12:32:03.894: INFO: Trying to get logs from node iruya-node pod pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2 container configmap-volume-test: STEP: delete the pod Jan 4 12:32:04.018: INFO: Waiting for pod pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2 to disappear Jan 4 12:32:04.024: INFO: Pod pod-configmaps-82585a89-1f4f-4d4b-8bd0-9479cb7ff9d2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:32:04.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4674" for this suite. Jan 4 12:32:10.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:32:10.257: INFO: namespace configmap-4674 deletion completed in 6.228836596s • [SLOW TEST:18.647 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:32:10.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should 
be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 4 12:32:10.533: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6957,SelfLink:/api/v1/namespaces/watch-6957/configmaps/e2e-watch-test-resource-version,UID:c0b3ee76-16f8-42ec-bf92-187881589a98,ResourceVersion:19260119,Generation:0,CreationTimestamp:2020-01-04 12:32:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 12:32:10.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6957,SelfLink:/api/v1/namespaces/watch-6957/configmaps/e2e-watch-test-resource-version,UID:c0b3ee76-16f8-42ec-bf92-187881589a98,ResourceVersion:19260120,Generation:0,CreationTimestamp:2020-01-04 12:32:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:32:10.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6957" for this suite. Jan 4 12:32:16.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:32:16.642: INFO: namespace watch-6957 deletion completed in 6.102808949s • [SLOW TEST:6.384 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:32:16.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9421 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule 
stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9421 STEP: Creating statefulset with conflicting port in namespace statefulset-9421 STEP: Waiting until pod test-pod will start running in namespace statefulset-9421 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9421 Jan 4 12:32:26.959: INFO: Observed stateful pod in namespace: statefulset-9421, name: ss-0, uid: ac585b17-d30b-49ae-8581-c2fe98e835eb, status phase: Failed. Waiting for statefulset controller to delete. Jan 4 12:32:26.966: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9421 STEP: Removing pod with conflicting port in namespace statefulset-9421 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9421 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 4 12:32:43.176: INFO: Deleting all statefulset in ns statefulset-9421 Jan 4 12:32:43.180: INFO: Scaling statefulset ss to 0 Jan 4 12:33:03.246: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 12:33:03.253: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:33:03.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9421" for this suite. 
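The eviction test above works by forcing a host-port conflict: a plain pod claims a hostPort on the chosen node, then a StatefulSet pod requesting the same hostPort lands on that node, fails, and must be deleted and recreated by the StatefulSet controller. A minimal sketch of the conflicting StatefulSet (image, port number, and labels are illustrative, not read from the run):

```yaml
# StatefulSet whose pod requests a hostPort already held by a pre-created
# test pod on the same node; ss-0 enters phase Failed and the controller
# is expected to delete and recreate it, as observed in the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 21017
          hostPort: 21017   # illustrative port; must collide with the conflicting pod
```

Once the conflicting pod is removed (the "Removing pod with conflicting port" step), the recreated ss-0 can bind the port and reach Running.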
Jan 4 12:33:09.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:33:09.540: INFO: namespace statefulset-9421 deletion completed in 6.248076675s • [SLOW TEST:52.897 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:33:09.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 4 12:33:09.751: INFO: Waiting up to 5m0s for pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8" in namespace "emptydir-6978" to be "success or failure" Jan 4 12:33:09.762: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.993195ms Jan 4 12:33:11.771: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020005537s Jan 4 12:33:13.784: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033618435s Jan 4 12:33:15.799: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048339583s Jan 4 12:33:17.807: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056465991s Jan 4 12:33:19.877: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126548463s Jan 4 12:33:21.884: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.133125986s Jan 4 12:33:23.895: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Running", Reason="", readiness=true. Elapsed: 14.144664054s Jan 4 12:33:25.928: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.176961705s STEP: Saw pod success Jan 4 12:33:25.928: INFO: Pod "pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8" satisfied condition "success or failure" Jan 4 12:33:25.932: INFO: Trying to get logs from node iruya-node pod pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8 container test-container: STEP: delete the pod Jan 4 12:33:25.990: INFO: Waiting for pod pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8 to disappear Jan 4 12:33:25.993: INFO: Pod pod-01b4904d-5b8a-4b1e-b4cb-9b4db76e0be8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:33:25.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6978" for this suite. 
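The (root,0777,default) case above exercises an emptyDir on the default medium (node filesystem), written as root, with the test container verifying 0777 permissions before exiting with success. A hedged sketch of an equivalent pod (name, image, and command are illustrative; the e2e suite uses its own test image):

```yaml
# Pod writing a file into an emptyDir and reporting its mode; the test
# waits for phase Succeeded ("success or failure" condition in the log).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium field, i.e. the default node-disk medium
```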
Jan 4 12:33:32.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:33:32.238: INFO: namespace emptydir-6978 deletion completed in 6.237188864s • [SLOW TEST:22.697 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:33:32.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 12:33:32.408: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 4 12:33:32.444: INFO: Number of nodes with available pods: 0 Jan 4 12:33:32.444: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 4 12:33:32.522: INFO: Number of nodes with available pods: 0 Jan 4 12:33:32.522: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:33.528: INFO: Number of nodes with available pods: 0 Jan 4 12:33:33.528: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:34.537: INFO: Number of nodes with available pods: 0 Jan 4 12:33:34.537: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:35.544: INFO: Number of nodes with available pods: 0 Jan 4 12:33:35.544: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:36.539: INFO: Number of nodes with available pods: 0 Jan 4 12:33:36.539: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:37.532: INFO: Number of nodes with available pods: 0 Jan 4 12:33:37.532: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:38.537: INFO: Number of nodes with available pods: 0 Jan 4 12:33:38.537: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:39.546: INFO: Number of nodes with available pods: 0 Jan 4 12:33:39.546: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:40.535: INFO: Number of nodes with available pods: 0 Jan 4 12:33:40.536: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:41.539: INFO: Number of nodes with available pods: 1 Jan 4 12:33:41.539: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 4 12:33:41.697: INFO: Number of nodes with available pods: 1 Jan 4 12:33:41.697: INFO: Number of running nodes: 0, number of available pods: 1 Jan 4 12:33:42.705: INFO: Number of nodes with available pods: 0 Jan 4 12:33:42.705: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 4 12:33:42.730: INFO: Number of nodes with available pods: 0 Jan 4 12:33:42.730: INFO: Node 
iruya-node is running more than one daemon pod Jan 4 12:33:43.739: INFO: Number of nodes with available pods: 0 Jan 4 12:33:43.739: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:44.740: INFO: Number of nodes with available pods: 0 Jan 4 12:33:44.740: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:45.741: INFO: Number of nodes with available pods: 0 Jan 4 12:33:45.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:46.740: INFO: Number of nodes with available pods: 0 Jan 4 12:33:46.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:47.741: INFO: Number of nodes with available pods: 0 Jan 4 12:33:47.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:48.740: INFO: Number of nodes with available pods: 0 Jan 4 12:33:48.740: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:49.741: INFO: Number of nodes with available pods: 0 Jan 4 12:33:49.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:50.739: INFO: Number of nodes with available pods: 0 Jan 4 12:33:50.739: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:51.738: INFO: Number of nodes with available pods: 0 Jan 4 12:33:51.738: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:52.739: INFO: Number of nodes with available pods: 0 Jan 4 12:33:52.739: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:53.744: INFO: Number of nodes with available pods: 0 Jan 4 12:33:53.744: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:54.736: INFO: Number of nodes with available pods: 0 Jan 4 12:33:54.736: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:55.742: INFO: Number of nodes with available pods: 0 Jan 4 12:33:55.742: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:56.753: INFO: Number of nodes with available pods: 0 Jan 4 12:33:56.753: INFO: 
Node iruya-node is running more than one daemon pod Jan 4 12:33:57.749: INFO: Number of nodes with available pods: 0 Jan 4 12:33:57.749: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:58.741: INFO: Number of nodes with available pods: 0 Jan 4 12:33:58.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:33:59.744: INFO: Number of nodes with available pods: 0 Jan 4 12:33:59.744: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:00.744: INFO: Number of nodes with available pods: 0 Jan 4 12:34:00.744: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:01.744: INFO: Number of nodes with available pods: 0 Jan 4 12:34:01.744: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:02.741: INFO: Number of nodes with available pods: 0 Jan 4 12:34:02.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:03.742: INFO: Number of nodes with available pods: 0 Jan 4 12:34:03.742: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:04.762: INFO: Number of nodes with available pods: 0 Jan 4 12:34:04.762: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:05.738: INFO: Number of nodes with available pods: 0 Jan 4 12:34:05.738: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:06.740: INFO: Number of nodes with available pods: 0 Jan 4 12:34:06.740: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:07.741: INFO: Number of nodes with available pods: 0 Jan 4 12:34:07.741: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:08.745: INFO: Number of nodes with available pods: 0 Jan 4 12:34:08.745: INFO: Node iruya-node is running more than one daemon pod Jan 4 12:34:09.740: INFO: Number of nodes with available pods: 1 Jan 4 12:34:09.740: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7529, will wait for the garbage collector to delete the pods Jan 4 12:34:09.820: INFO: Deleting DaemonSet.extensions daemon-set took: 16.180839ms Jan 4 12:34:10.121: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.557891ms Jan 4 12:34:26.643: INFO: Number of nodes with available pods: 0 Jan 4 12:34:26.643: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 12:34:26.647: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7529/daemonsets","resourceVersion":"19260493"},"items":null} Jan 4 12:34:26.649: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7529/pods","resourceVersion":"19260493"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:34:26.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7529" for this suite. 
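The "complex daemon" flow above is driven entirely by node labels: the DaemonSet carries a nodeSelector, so it schedules zero pods until a node is labelled blue, is unscheduled when the node is relabelled green, and reschedules after its selector (and update strategy) are changed to match. A sketch under assumed names (label key/values and image are illustrative, not the suite's actual constants):

```yaml
# DaemonSet that only schedules onto nodes with a matching color label;
# flipping the node label blue -> green, then updating this nodeSelector
# to green, reproduces the unschedule/reschedule phases in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        daemonset-color: blue   # illustrative key; the test switches this to green
      containers:
      - name: app
        image: nginx
```

The node side of the dance is a relabel such as `kubectl label node iruya-node daemonset-color=green --overwrite`.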
Jan 4 12:34:32.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:34:32.879: INFO: namespace daemonsets-7529 deletion completed in 6.18362141s • [SLOW TEST:60.641 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:34:32.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:34:39.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3798" for this suite. 
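The namespace test above checks cascading deletion: a Service created inside a throwaway namespace must be garbage-collected when the namespace is deleted, which the test verifies by recreating a namespace of the same name and confirming no Service survives. A minimal sketch of the kind of object involved (name and namespace are illustrative; the suite generates names like nsdeletetest-4091):

```yaml
# Service whose lifetime is bound to its namespace; deleting the
# namespace must remove it along with everything else inside.
apiVersion: v1
kind: Service
metadata:
  name: test-service        # illustrative name
  namespace: nsdeletetest   # illustrative; the test uses a generated namespace
spec:
  selector:
    app: demo
  ports:
  - port: 80
```

After `kubectl delete namespace nsdeletetest` completes and the namespace is recreated, `kubectl get services -n nsdeletetest` should return nothing.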
Jan 4 12:34:45.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:34:45.647: INFO: namespace namespaces-3798 deletion completed in 6.141287449s
STEP: Destroying namespace "nsdeletetest-4091" for this suite.
Jan 4 12:34:45.649: INFO: Namespace nsdeletetest-4091 was already deleted
STEP: Destroying namespace "nsdeletetest-5938" for this suite.
Jan 4 12:34:51.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:34:51.842: INFO: namespace nsdeletetest-5938 deletion completed in 6.19333942s
• [SLOW TEST:18.963 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:34:51.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 12:34:51.930: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 4 12:34:54.804: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:34:55.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-44" for this suite.
Jan 4 12:35:07.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:35:07.488: INFO: namespace replication-controller-44 deletion completed in 12.279737597s
• [SLOW TEST:15.646 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:35:07.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 4 12:35:07.634: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:35:07.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4545" for this suite.
Jan 4 12:35:13.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:35:13.909: INFO: namespace kubectl-4545 deletion completed in 6.166093325s
• [SLOW TEST:6.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:35:13.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 4 12:35:23.230: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:35:23.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6733" for this suite.
Jan 4 12:35:47.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:35:47.543: INFO: namespace replicaset-6733 deletion completed in 24.229043303s
• [SLOW TEST:33.633 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:35:47.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 4 12:35:47.612: INFO: Waiting up to 5m0s for pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8" in namespace "downward-api-3216" to be "success or failure"
Jan 4 12:35:47.622: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115314ms
Jan 4 12:35:49.656: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044506252s
Jan 4 12:35:51.664: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052228714s
Jan 4 12:35:53.677: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065424594s
Jan 4 12:35:55.692: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079894254s
Jan 4 12:35:57.706: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094251877s
STEP: Saw pod success
Jan 4 12:35:57.706: INFO: Pod "downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8" satisfied condition "success or failure"
Jan 4 12:35:57.713: INFO: Trying to get logs from node iruya-node pod downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8 container dapi-container:
STEP: delete the pod
Jan 4 12:35:57.795: INFO: Waiting for pod downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8 to disappear
Jan 4 12:35:57.905: INFO: Pod downward-api-66c265df-a98f-4377-8e15-c4b54d1078c8 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:35:57.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3216" for this suite.
Jan 4 12:36:03.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:36:04.053: INFO: namespace downward-api-3216 deletion completed in 6.137083394s
• [SLOW TEST:16.510 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:36:04.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f6e52962-db76-440d-9af4-c80a3c83c994
STEP: Creating a pod to test consume configMaps
Jan 4 12:36:04.215: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d" in namespace "projected-7235" to be "success or failure"
Jan 4 12:36:04.219: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572716ms
Jan 4 12:36:06.229: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013519204s
Jan 4 12:36:08.236: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020767115s
Jan 4 12:36:10.246: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030231668s
Jan 4 12:36:12.252: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036263937s
Jan 4 12:36:14.267: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051894013s
STEP: Saw pod success
Jan 4 12:36:14.268: INFO: Pod "pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d" satisfied condition "success or failure"
Jan 4 12:36:14.279: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d container projected-configmap-volume-test:
STEP: delete the pod
Jan 4 12:36:14.434: INFO: Waiting for pod pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d to disappear
Jan 4 12:36:14.440: INFO: Pod pod-projected-configmaps-d7c10b5a-1020-4c12-bcb6-65c686896e3d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:36:14.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7235" for this suite.
Jan 4 12:36:20.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:36:20.615: INFO: namespace projected-7235 deletion completed in 6.169700527s
• [SLOW TEST:16.561 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:36:20.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-g7jq
STEP: Creating a pod to test atomic-volume-subpath
Jan 4 12:36:20.803: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-g7jq" in namespace "subpath-1300" to be "success or failure"
Jan 4 12:36:20.822: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.656764ms
Jan 4 12:36:22.832: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028381739s
Jan 4 12:36:24.837: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033624074s
Jan 4 12:36:26.844: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040366381s
Jan 4 12:36:28.855: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052268211s
Jan 4 12:36:30.868: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 10.065061973s
Jan 4 12:36:32.885: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 12.081648811s
Jan 4 12:36:34.897: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 14.093428331s
Jan 4 12:36:36.906: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 16.102582023s
Jan 4 12:36:38.914: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 18.110356795s
Jan 4 12:36:40.920: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 20.116440914s
Jan 4 12:36:42.932: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 22.129001861s
Jan 4 12:36:44.945: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 24.142050338s
Jan 4 12:36:46.957: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 26.153786992s
Jan 4 12:36:48.970: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Running", Reason="", readiness=true. Elapsed: 28.167029514s
Jan 4 12:36:51.019: INFO: Pod "pod-subpath-test-downwardapi-g7jq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.215699742s
STEP: Saw pod success
Jan 4 12:36:51.019: INFO: Pod "pod-subpath-test-downwardapi-g7jq" satisfied condition "success or failure"
Jan 4 12:36:51.053: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-g7jq container test-container-subpath-downwardapi-g7jq:
STEP: delete the pod
Jan 4 12:36:51.270: INFO: Waiting for pod pod-subpath-test-downwardapi-g7jq to disappear
Jan 4 12:36:51.301: INFO: Pod pod-subpath-test-downwardapi-g7jq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-g7jq
Jan 4 12:36:51.301: INFO: Deleting pod "pod-subpath-test-downwardapi-g7jq" in namespace "subpath-1300"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:36:51.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1300" for this suite.
Jan 4 12:36:57.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:36:57.622: INFO: namespace subpath-1300 deletion completed in 6.207872964s
• [SLOW TEST:37.007 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:36:57.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 12:36:57.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd" in namespace "downward-api-4195" to be "success or failure"
Jan 4 12:36:57.927: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd": Phase="Pending", Reason="", readiness=false. Elapsed: 62.092227ms
Jan 4 12:36:59.934: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068900959s
Jan 4 12:37:01.942: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076570852s
Jan 4 12:37:03.966: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100751553s
Jan 4 12:37:05.981: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115816228s
Jan 4 12:37:07.987: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122027525s
STEP: Saw pod success
Jan 4 12:37:07.987: INFO: Pod "downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd" satisfied condition "success or failure"
Jan 4 12:37:07.991: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd container client-container:
STEP: delete the pod
Jan 4 12:37:08.291: INFO: Waiting for pod downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd to disappear
Jan 4 12:37:08.318: INFO: Pod downwardapi-volume-f363d224-5eb3-4317-bd34-9ef3ea452acd no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:37:08.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4195" for this suite.
Jan 4 12:37:14.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:37:14.585: INFO: namespace downward-api-4195 deletion completed in 6.141919321s
• [SLOW TEST:16.963 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] HostPath
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:37:14.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 4 12:37:14.780: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2216" to be "success or failure"
Jan 4 12:37:14.788: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036804ms
Jan 4 12:37:16.800: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01939986s
Jan 4 12:37:18.813: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032908009s
Jan 4 12:37:20.825: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0444986s
Jan 4 12:37:22.831: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050595053s
Jan 4 12:37:24.840: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059873808s
Jan 4 12:37:26.849: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068274785s
STEP: Saw pod success
Jan 4 12:37:26.849: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 4 12:37:26.854: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jan 4 12:37:26.986: INFO: Waiting for pod pod-host-path-test to disappear
Jan 4 12:37:26.995: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:37:26.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2216" for this suite.
Jan 4 12:37:33.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:37:33.147: INFO: namespace hostpath-2216 deletion completed in 6.144681038s
• [SLOW TEST:18.562 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:37:33.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 4 12:37:33.293: INFO: Waiting up to 5m0s for pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055" in namespace "var-expansion-7064" to be "success or failure"
Jan 4 12:37:33.299: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Pending", Reason="", readiness=false. Elapsed: 5.923107ms
Jan 4 12:37:35.305: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012116858s
Jan 4 12:37:37.312: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01818406s
Jan 4 12:37:39.318: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024302665s
Jan 4 12:37:41.329: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035572177s
Jan 4 12:37:43.334: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Running", Reason="", readiness=true. Elapsed: 10.041120398s
Jan 4 12:37:45.340: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.046214691s
STEP: Saw pod success
Jan 4 12:37:45.340: INFO: Pod "var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055" satisfied condition "success or failure"
Jan 4 12:37:45.343: INFO: Trying to get logs from node iruya-node pod var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055 container dapi-container:
STEP: delete the pod
Jan 4 12:37:45.406: INFO: Waiting for pod var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055 to disappear
Jan 4 12:37:45.464: INFO: Pod var-expansion-ffeb4483-b7f1-4313-b970-f6cc2d9d4055 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:37:45.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7064" for this suite.
Jan 4 12:37:51.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:37:51.640: INFO: namespace var-expansion-7064 deletion completed in 6.170644935s
• [SLOW TEST:18.492 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:37:51.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6762
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6762 to expose endpoints map[]
Jan 4 12:37:51.959: INFO: Get endpoints failed (11.881271ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 4 12:37:52.968: INFO: successfully validated that service endpoint-test2 in namespace services-6762 exposes endpoints map[] (1.020632962s elapsed)
STEP: Creating pod pod1 in namespace services-6762
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6762 to expose endpoints map[pod1:[80]]
Jan 4 12:37:57.367: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.369129978s elapsed, will retry)
Jan 4 12:38:01.414: INFO: successfully validated that service endpoint-test2 in namespace services-6762 exposes endpoints map[pod1:[80]] (8.416084605s elapsed)
STEP: Creating pod pod2 in namespace services-6762
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6762 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 4 12:38:05.966: INFO: Unexpected endpoints: found map[f44c4e7f-b8a5-449c-89df-400dadfaee47:[80]], expected map[pod1:[80] pod2:[80]] (4.541346779s elapsed, will retry)
Jan 4 12:38:09.013: INFO: successfully validated that service endpoint-test2 in namespace services-6762 exposes endpoints map[pod1:[80] pod2:[80]] (7.588221065s elapsed)
STEP: Deleting pod pod1 in namespace services-6762
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6762 to expose endpoints map[pod2:[80]]
Jan 4 12:38:10.049: INFO: successfully validated that service endpoint-test2 in namespace services-6762 exposes endpoints map[pod2:[80]] (1.030069303s elapsed)
STEP: Deleting pod pod2 in namespace services-6762
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6762 to expose endpoints map[]
Jan 4 12:38:11.071: INFO: successfully validated that service endpoint-test2 in namespace services-6762 exposes endpoints map[] (1.015534261s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:38:13.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6762" for this suite.
Jan 4 12:38:35.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:38:35.233: INFO: namespace services-6762 deletion completed in 22.095761509s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:43.593 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:38:35.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d
Jan 4 12:38:35.415: INFO: Pod name my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d: Found 0 pods out of 1
Jan 4 12:38:40.427: INFO: Pod name my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d: Found 1 pods out of 1
Jan 4 12:38:40.427: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d" are running
Jan 4 12:38:48.447: INFO: Pod "my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d-cq5d4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:38:35 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:38:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:38:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:38:35 +0000 UTC Reason: Message:}])
Jan 4 12:38:48.448: INFO: Trying to dial the pod
Jan 4 12:38:53.491: INFO: Controller my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d: Got expected result from replica 1 [my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d-cq5d4]: "my-hostname-basic-bb3851d4-5b78-4e7d-b258-73c3ebdb231d-cq5d4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:38:53.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2213" for this suite.
Jan 4 12:38:59.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:38:59.742: INFO: namespace replication-controller-2213 deletion completed in 6.243221503s
• [SLOW TEST:24.509 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:38:59.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 4 12:38:59.918: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 4 12:38:59.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4232'
Jan 4 12:39:03.410: INFO: stderr: ""
Jan 4 12:39:03.410: INFO: stdout: "service/redis-slave created\n"
Jan 4 12:39:03.411: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 4 12:39:03.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4232'
Jan 4 12:39:03.959: INFO: stderr: ""
Jan 4 12:39:03.960: INFO: stdout: "service/redis-master created\n"
Jan 4 12:39:03.960: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 4 12:39:03.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4232'
Jan 4 12:39:04.565: INFO: stderr: ""
Jan 4 12:39:04.565: INFO: stdout: "service/frontend created\n"
Jan 4 12:39:04.565: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 4 12:39:04.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4232'
Jan 4 12:39:04.880: INFO: stderr: ""
Jan 4 12:39:04.880: INFO: stdout: "deployment.apps/frontend created\n"
Jan 4 12:39:04.880: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 4 12:39:04.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4232'
Jan 4 12:39:05.489: INFO: stderr: ""
Jan 4 12:39:05.489: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 4 12:39:05.490: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 4 12:39:05.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4232'
Jan 4 12:39:07.088: INFO: stderr: ""
Jan 4 12:39:07.088: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 4 12:39:07.088: INFO: Waiting for all frontend pods to be Running.
Jan 4 12:39:32.140: INFO: Waiting for frontend to serve content.
Jan 4 12:39:32.251: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response:
Jan 4 12:39:37.296: INFO: Trying to add a new entry to the guestbook.
Jan 4 12:39:37.452: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 4 12:39:37.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4232'
Jan 4 12:39:37.787: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:39:37.787: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:39:37.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4232'
Jan 4 12:39:38.172: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:39:38.172: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:39:38.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4232'
Jan 4 12:39:38.481: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:39:38.481: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:39:38.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4232'
Jan 4 12:39:38.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:39:38.631: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:39:38.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4232'
Jan 4 12:39:38.737: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:39:38.737: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:39:38.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4232'
Jan 4 12:39:38.809: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:39:38.809: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:39:38.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4232" for this suite.
Jan 4 12:40:18.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:40:19.065: INFO: namespace kubectl-4232 deletion completed in 40.249273363s
• [SLOW TEST:79.323 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:40:19.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 4 12:40:19.107: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 4 12:40:19.738: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 4 12:40:22.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 4 12:40:24.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 4 12:40:26.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 4 12:40:28.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 4 12:40:30.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738419, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 4 12:40:35.674: INFO: Waited 3.645745626s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:40:36.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6230" for this suite.
Jan 4 12:40:42.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:40:42.716: INFO: namespace aggregator-6230 deletion completed in 6.146103326s
• [SLOW TEST:23.650 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:40:42.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-069399b6-db25-441c-9402-55c9e2df7460
STEP: Creating a pod to test consume configMaps
Jan 4 12:40:42.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67" in namespace "projected-260" to be "success or failure"
Jan 4 12:40:42.921: INFO: Pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027497ms
Jan 4 12:40:45.224: INFO: Pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306797444s
Jan 4 12:40:47.233: INFO: Pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315546917s
Jan 4 12:40:49.241: INFO: Pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323556128s
Jan 4 12:40:51.251: INFO: Pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.33423617s
STEP: Saw pod success
Jan 4 12:40:51.251: INFO: Pod "pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67" satisfied condition "success or failure"
Jan 4 12:40:51.254: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67 container projected-configmap-volume-test:
STEP: delete the pod
Jan 4 12:40:51.332: INFO: Waiting for pod pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67 to disappear
Jan 4 12:40:51.394: INFO: Pod pod-projected-configmaps-30c04bf7-d90f-431b-b293-658ec1779e67 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:40:51.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-260" for this suite.
Jan 4 12:40:57.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:40:57.562: INFO: namespace projected-260 deletion completed in 6.163240185s
• [SLOW TEST:14.846 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:40:57.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 4 12:44:02.106: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:02.149: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:04.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:04.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:06.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:06.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:08.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:08.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:10.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:10.162: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:12.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:12.163: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:14.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:14.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:16.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:16.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:18.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:18.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:20.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:20.164: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:22.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:22.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:24.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:24.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:26.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:26.832: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:28.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:28.161: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:30.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:30.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:32.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:32.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:34.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:34.173: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:36.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:36.170: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 4 12:44:38.150: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 4 12:44:38.157: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:44:38.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4037" for this suite.
Jan 4 12:45:00.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:45:00.327: INFO: namespace container-lifecycle-hook-4037 deletion completed in 22.163679165s
• [SLOW TEST:242.764 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:45:00.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 4 12:45:00.441: INFO: Waiting up to 5m0s for pod "pod-f71b1552-b324-472c-8d53-7826a39281e6" in namespace "emptydir-8878" to be "success or failure"
Jan 4 12:45:00.447: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.603085ms
Jan 4 12:45:02.457: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016141778s
Jan 4 12:45:04.465: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02389034s
Jan 4 12:45:06.482: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040874272s
Jan 4 12:45:08.497: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056535743s
Jan 4 12:45:10.510: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068892905s
STEP: Saw pod success
Jan 4 12:45:10.510: INFO: Pod "pod-f71b1552-b324-472c-8d53-7826a39281e6" satisfied condition "success or failure"
Jan 4 12:45:10.518: INFO: Trying to get logs from node iruya-node pod pod-f71b1552-b324-472c-8d53-7826a39281e6 container test-container:
STEP: delete the pod
Jan 4 12:45:10.596: INFO: Waiting for pod pod-f71b1552-b324-472c-8d53-7826a39281e6 to disappear
Jan 4 12:45:10.610: INFO: Pod pod-f71b1552-b324-472c-8d53-7826a39281e6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:45:10.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8878" for this suite.
Jan 4 12:45:16.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:45:16.717: INFO: namespace emptydir-8878 deletion completed in 6.101211201s
• [SLOW TEST:16.390 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:45:16.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 12:45:16.987: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187" in namespace "downward-api-5004" to be "success or failure"
Jan 4 12:45:17.085: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187": Phase="Pending", Reason="", readiness=false. Elapsed: 98.421266ms
Jan 4 12:45:19.094: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107289635s
Jan 4 12:45:21.104: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117691416s
Jan 4 12:45:23.115: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127806565s
Jan 4 12:45:25.124: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136988478s
Jan 4 12:45:27.134: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147176181s
STEP: Saw pod success
Jan 4 12:45:27.134: INFO: Pod "downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187" satisfied condition "success or failure"
Jan 4 12:45:27.138: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187 container client-container:
STEP: delete the pod
Jan 4 12:45:27.426: INFO: Waiting for pod downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187 to disappear
Jan 4 12:45:27.438: INFO: Pod downwardapi-volume-2de3383a-bee9-4d9d-92de-846a4bdfe187 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:45:27.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5004" for this suite.
Jan 4 12:45:33.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:45:33.730: INFO: namespace downward-api-5004 deletion completed in 6.284886034s
• [SLOW TEST:17.012 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:45:33.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 4 12:45:42.465: INFO: Successfully updated pod "pod-update-3c2851e9-3c46-4dc0-a9e2-753320f6ffb8"
STEP: verifying the updated pod is in kubernetes
Jan 4 12:45:42.510: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:45:42.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8609" for this suite.
Jan 4 12:46:04.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:46:04.635: INFO: namespace pods-8609 deletion completed in 22.118907203s
• [SLOW TEST:30.905 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:46:04.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 4 12:46:04.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6448'
Jan 4 12:46:05.194: INFO: stderr: ""
Jan 4 12:46:05.194: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 4 12:46:06.206: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:06.206: INFO: Found 0 / 1
Jan 4 12:46:07.208: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:07.208: INFO: Found 0 / 1
Jan 4 12:46:08.216: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:08.216: INFO: Found 0 / 1
Jan 4 12:46:09.202: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:09.202: INFO: Found 0 / 1
Jan 4 12:46:10.202: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:10.202: INFO: Found 0 / 1
Jan 4 12:46:11.202: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:11.202: INFO: Found 0 / 1
Jan 4 12:46:12.201: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:12.202: INFO: Found 0 / 1
Jan 4 12:46:13.210: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:13.210: INFO: Found 0 / 1
Jan 4 12:46:14.235: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:14.235: INFO: Found 0 / 1
Jan 4 12:46:15.204: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:15.204: INFO: Found 1 / 1
Jan 4 12:46:15.204: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 4 12:46:15.209: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:15.209: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 4 12:46:15.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5w8mk --namespace=kubectl-6448 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 4 12:46:15.395: INFO: stderr: ""
Jan 4 12:46:15.395: INFO: stdout: "pod/redis-master-5w8mk patched\n"
STEP: checking annotations
Jan 4 12:46:15.426: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:46:15.426: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:46:15.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6448" for this suite.
Jan 4 12:46:37.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:46:37.587: INFO: namespace kubectl-6448 deletion completed in 22.155325267s
• [SLOW TEST:32.951 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:46:37.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-48eedc9e-065b-408e-80ef-0683f11f8672 in namespace container-probe-2493
Jan 4 12:46:47.778: INFO: Started pod liveness-48eedc9e-065b-408e-80ef-0683f11f8672 in namespace container-probe-2493
STEP: checking the pod's current state and verifying that restartCount is present
Jan 4 12:46:47.784: INFO: Initial restart count of pod liveness-48eedc9e-065b-408e-80ef-0683f11f8672 is 0
Jan 4 12:47:07.924: INFO: Restart count of pod container-probe-2493/liveness-48eedc9e-065b-408e-80ef-0683f11f8672 is now 1 (20.139641358s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:47:07.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2493" for this suite.
Jan 4 12:47:14.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:47:14.163: INFO: namespace container-probe-2493 deletion completed in 6.198763404s
• [SLOW TEST:36.576 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:47:14.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-2ecdb948-b50d-4746-820c-2222da3c7949
STEP: Creating a pod to test consume secrets
Jan 4 12:47:14.294: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d" in namespace "projected-4904" to be "success or failure"
Jan 4 12:47:14.309: INFO: Pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.037214ms
Jan 4 12:47:16.316: INFO: Pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022408163s
Jan 4 12:47:18.337: INFO: Pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043234637s
Jan 4 12:47:20.342: INFO: Pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047788241s
Jan 4 12:47:22.348: INFO: Pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05442881s
STEP: Saw pod success
Jan 4 12:47:22.348: INFO: Pod "pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d" satisfied condition "success or failure"
Jan 4 12:47:22.353: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d container projected-secret-volume-test:
STEP: delete the pod
Jan 4 12:47:22.406: INFO: Waiting for pod pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d to disappear
Jan 4 12:47:22.511: INFO: Pod pod-projected-secrets-4a223e88-b9de-4716-80b9-7281c7e5405d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:47:22.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4904" for this suite.
Jan 4 12:47:28.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:47:28.654: INFO: namespace projected-4904 deletion completed in 6.13696712s
• [SLOW TEST:14.491 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:47:28.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:47:36.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-921" for this suite.
Jan 4 12:48:18.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:48:19.005: INFO: namespace kubelet-test-921 deletion completed in 42.184293812s
• [SLOW TEST:50.351 seconds]
[k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:48:19.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 4 12:48:19.061: INFO: namespace kubectl-8246
Jan 4 12:48:19.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8246'
Jan 4 12:48:19.349: INFO: stderr: ""
Jan 4 12:48:19.349: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 4 12:48:20.359: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:20.359: INFO: Found 0 / 1
Jan 4 12:48:21.357: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:21.357: INFO: Found 0 / 1
Jan 4 12:48:22.362: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:22.362: INFO: Found 0 / 1
Jan 4 12:48:23.364: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:23.364: INFO: Found 0 / 1
Jan 4 12:48:24.355: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:24.355: INFO: Found 0 / 1
Jan 4 12:48:25.363: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:25.363: INFO: Found 0 / 1
Jan 4 12:48:26.368: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:26.368: INFO: Found 1 / 1
Jan 4 12:48:26.368: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 4 12:48:26.372: INFO: Selector matched 1 pods for map[app:redis]
Jan 4 12:48:26.372: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 4 12:48:26.372: INFO: wait on redis-master startup in kubectl-8246
Jan 4 12:48:26.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kvvgp redis-master --namespace=kubectl-8246'
Jan 4 12:48:26.611: INFO: stderr: ""
Jan 4 12:48:26.611: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jan 12:48:26.096 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 12:48:26.097 # Server started, Redis version 3.2.12\n1:M 04 Jan 12:48:26.097 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 12:48:26.097 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 4 12:48:26.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8246'
Jan 4 12:48:26.776: INFO: stderr: ""
Jan 4 12:48:26.776: INFO: stdout: "service/rm2 exposed\n"
Jan 4 12:48:26.783: INFO: Service rm2 in namespace kubectl-8246 found.
STEP: exposing service
Jan 4 12:48:28.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8246'
Jan 4 12:48:28.954: INFO: stderr: ""
Jan 4 12:48:28.954: INFO: stdout: "service/rm3 exposed\n"
Jan 4 12:48:28.962: INFO: Service rm3 in namespace kubectl-8246 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 12:48:30.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8246" for this suite.
Jan 4 12:48:53.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 12:48:53.445: INFO: namespace kubectl-8246 deletion completed in 22.468465438s
• [SLOW TEST:34.440 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 12:48:53.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 12:48:53.521: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 4 12:48:53.585: INFO: Pod name sample-pod: Found 0 pods
out of 1 Jan 4 12:48:58.592: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 4 12:49:02.607: INFO: Creating deployment "test-rolling-update-deployment" Jan 4 12:49:02.617: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 4 12:49:02.652: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 4 12:49:04.668: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 4 12:49:04.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:49:06.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:49:08.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713738942, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:49:10.681: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 4 12:49:10.695: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1142,SelfLink:/apis/apps/v1/namespaces/deployment-1142/deployments/test-rolling-update-deployment,UID:ab8e05a7-472c-453d-a45a-77f3c138d1bd,ResourceVersion:19262672,Generation:1,CreationTimestamp:2020-01-04 12:49:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 12:49:02 +0000 UTC 2020-01-04 12:49:02 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 12:49:10 +0000 UTC 2020-01-04 12:49:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 4 12:49:10.700: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1142,SelfLink:/apis/apps/v1/namespaces/deployment-1142/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:68fa687e-ddcc-4c38-a1e6-58bee4fec350,ResourceVersion:19262660,Generation:1,CreationTimestamp:2020-01-04 12:49:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ab8e05a7-472c-453d-a45a-77f3c138d1bd 0xc002e24b27 0xc002e24b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 4 12:49:10.700: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 4 12:49:10.701: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1142,SelfLink:/apis/apps/v1/namespaces/deployment-1142/replicasets/test-rolling-update-controller,UID:208f3194-edad-495d-9c3b-7c68ea7620f4,ResourceVersion:19262670,Generation:2,CreationTimestamp:2020-01-04 12:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ab8e05a7-472c-453d-a45a-77f3c138d1bd 0xc002e24a57 0xc002e24a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 12:49:10.710: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-cnv2n" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-cnv2n,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1142,SelfLink:/api/v1/namespaces/deployment-1142/pods/test-rolling-update-deployment-79f6b9d75c-cnv2n,UID:84ea8e42-1da6-4366-9f64-943c111e4e10,ResourceVersion:19262659,Generation:0,CreationTimestamp:2020-01-04 12:49:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 68fa687e-ddcc-4c38-a1e6-58bee4fec350 0xc002e25437 0xc002e25438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ws78k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ws78k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ws78k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e254b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e254d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:49:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:49:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:49:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:49:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-04 12:49:02 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 12:49:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f88453e48650361a3de7c9a7669014ac216d7828818c4c26179af30ba355a7c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:49:10.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-1142" for this suite. Jan 4 12:49:16.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:49:16.886: INFO: namespace deployment-1142 deletion completed in 6.16831731s • [SLOW TEST:23.440 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:49:16.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4227 I0104 12:49:17.099874 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4227, replica count: 1 I0104 12:49:18.150413 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:19.150700 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:20.151029 8 runners.go:180] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:21.151267 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:22.151677 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:23.151980 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:24.152184 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:25.152477 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:26.152782 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 12:49:27.153080 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 4 12:49:27.316: INFO: Created: latency-svc-w4wcm Jan 4 12:49:27.343: INFO: Got endpoints: latency-svc-w4wcm [89.700761ms] Jan 4 12:49:27.469: INFO: Created: latency-svc-db55n Jan 4 12:49:27.475: INFO: Got endpoints: latency-svc-db55n [131.942811ms] Jan 4 12:49:27.512: INFO: Created: latency-svc-n55fp Jan 4 12:49:27.631: INFO: Got endpoints: latency-svc-n55fp [287.96078ms] Jan 4 12:49:27.646: INFO: Created: latency-svc-f4ww9 Jan 4 12:49:27.656: INFO: Got endpoints: latency-svc-f4ww9 [312.52771ms] Jan 4 12:49:27.694: INFO: Created: latency-svc-t9xd5 Jan 4 12:49:27.730: INFO: Got endpoints: latency-svc-t9xd5 [385.839151ms] Jan 4 12:49:27.736: 
INFO: Created: latency-svc-5ckjk Jan 4 12:49:27.833: INFO: Got endpoints: latency-svc-5ckjk [489.115827ms] Jan 4 12:49:27.886: INFO: Created: latency-svc-vg5rx Jan 4 12:49:28.010: INFO: Got endpoints: latency-svc-vg5rx [666.798599ms] Jan 4 12:49:28.011: INFO: Created: latency-svc-7zgpg Jan 4 12:49:28.019: INFO: Got endpoints: latency-svc-7zgpg [675.205202ms] Jan 4 12:49:28.059: INFO: Created: latency-svc-fv952 Jan 4 12:49:28.064: INFO: Got endpoints: latency-svc-fv952 [720.274788ms] Jan 4 12:49:28.091: INFO: Created: latency-svc-xhwq4 Jan 4 12:49:28.099: INFO: Got endpoints: latency-svc-xhwq4 [755.521458ms] Jan 4 12:49:28.216: INFO: Created: latency-svc-d5pmm Jan 4 12:49:28.266: INFO: Got endpoints: latency-svc-d5pmm [922.219829ms] Jan 4 12:49:28.279: INFO: Created: latency-svc-hz4bv Jan 4 12:49:28.283: INFO: Got endpoints: latency-svc-hz4bv [938.847067ms] Jan 4 12:49:28.378: INFO: Created: latency-svc-k4rzl Jan 4 12:49:28.395: INFO: Got endpoints: latency-svc-k4rzl [1.051038944s] Jan 4 12:49:28.422: INFO: Created: latency-svc-rjtxt Jan 4 12:49:28.428: INFO: Got endpoints: latency-svc-rjtxt [1.083933962s] Jan 4 12:49:28.461: INFO: Created: latency-svc-wr2d4 Jan 4 12:49:28.470: INFO: Got endpoints: latency-svc-wr2d4 [1.126120522s] Jan 4 12:49:28.587: INFO: Created: latency-svc-l9j5v Jan 4 12:49:28.625: INFO: Got endpoints: latency-svc-l9j5v [1.28080801s] Jan 4 12:49:28.628: INFO: Created: latency-svc-lszv4 Jan 4 12:49:28.645: INFO: Got endpoints: latency-svc-lszv4 [1.169328656s] Jan 4 12:49:28.773: INFO: Created: latency-svc-qtxkn Jan 4 12:49:28.792: INFO: Created: latency-svc-pbmqt Jan 4 12:49:28.798: INFO: Got endpoints: latency-svc-qtxkn [1.166819075s] Jan 4 12:49:28.818: INFO: Got endpoints: latency-svc-pbmqt [1.161498665s] Jan 4 12:49:28.846: INFO: Created: latency-svc-mz7bn Jan 4 12:49:28.927: INFO: Got endpoints: latency-svc-mz7bn [1.197145478s] Jan 4 12:49:28.934: INFO: Created: latency-svc-pkk5b Jan 4 12:49:28.936: INFO: Got endpoints: latency-svc-pkk5b 
[1.102760094s] Jan 4 12:49:28.987: INFO: Created: latency-svc-4mpzt Jan 4 12:49:28.998: INFO: Got endpoints: latency-svc-4mpzt [987.06901ms] Jan 4 12:49:29.267: INFO: Created: latency-svc-pghb2 Jan 4 12:49:29.368: INFO: Got endpoints: latency-svc-pghb2 [1.348984142s] Jan 4 12:49:29.477: INFO: Created: latency-svc-h8j99 Jan 4 12:49:29.496: INFO: Got endpoints: latency-svc-h8j99 [1.431933389s] Jan 4 12:49:29.521: INFO: Created: latency-svc-bd9kl Jan 4 12:49:29.565: INFO: Got endpoints: latency-svc-bd9kl [1.465531516s] Jan 4 12:49:29.664: INFO: Created: latency-svc-fnsqp Jan 4 12:49:29.670: INFO: Got endpoints: latency-svc-fnsqp [1.404120312s] Jan 4 12:49:29.721: INFO: Created: latency-svc-795zb Jan 4 12:49:29.731: INFO: Got endpoints: latency-svc-795zb [1.448069548s] Jan 4 12:49:29.844: INFO: Created: latency-svc-zd5wv Jan 4 12:49:29.848: INFO: Got endpoints: latency-svc-zd5wv [1.452647651s] Jan 4 12:49:29.925: INFO: Created: latency-svc-zfn4n Jan 4 12:49:30.005: INFO: Got endpoints: latency-svc-zfn4n [1.57748265s] Jan 4 12:49:30.016: INFO: Created: latency-svc-mq255 Jan 4 12:49:30.052: INFO: Got endpoints: latency-svc-mq255 [1.581284448s] Jan 4 12:49:30.096: INFO: Created: latency-svc-h8xp2 Jan 4 12:49:30.191: INFO: Got endpoints: latency-svc-h8xp2 [1.566261939s] Jan 4 12:49:30.195: INFO: Created: latency-svc-qdggl Jan 4 12:49:30.230: INFO: Got endpoints: latency-svc-qdggl [1.585611728s] Jan 4 12:49:30.341: INFO: Created: latency-svc-46n6s Jan 4 12:49:30.365: INFO: Created: latency-svc-cddhl Jan 4 12:49:30.371: INFO: Got endpoints: latency-svc-46n6s [1.572897307s] Jan 4 12:49:30.374: INFO: Got endpoints: latency-svc-cddhl [1.5557689s] Jan 4 12:49:30.424: INFO: Created: latency-svc-qrzv2 Jan 4 12:49:30.496: INFO: Got endpoints: latency-svc-qrzv2 [1.569237811s] Jan 4 12:49:30.548: INFO: Created: latency-svc-j2fcn Jan 4 12:49:30.558: INFO: Got endpoints: latency-svc-j2fcn [1.622127112s] Jan 4 12:49:30.654: INFO: Created: latency-svc-fd5s6 Jan 4 12:49:30.673: INFO: Got 
endpoints: latency-svc-fd5s6 [1.674836075s] Jan 4 12:49:30.696: INFO: Created: latency-svc-l8h4f Jan 4 12:49:30.707: INFO: Got endpoints: latency-svc-l8h4f [1.339157735s] Jan 4 12:49:30.749: INFO: Created: latency-svc-qsbjf Jan 4 12:49:30.834: INFO: Created: latency-svc-cklgt Jan 4 12:49:30.835: INFO: Got endpoints: latency-svc-qsbjf [161.537984ms] Jan 4 12:49:30.849: INFO: Got endpoints: latency-svc-cklgt [1.35301809s] Jan 4 12:49:31.034: INFO: Created: latency-svc-rdb4x Jan 4 12:49:31.068: INFO: Got endpoints: latency-svc-rdb4x [1.503070535s] Jan 4 12:49:31.132: INFO: Created: latency-svc-k2tzz Jan 4 12:49:31.239: INFO: Got endpoints: latency-svc-k2tzz [1.568213744s] Jan 4 12:49:31.287: INFO: Created: latency-svc-5cmcl Jan 4 12:49:31.317: INFO: Got endpoints: latency-svc-5cmcl [1.585717698s] Jan 4 12:49:31.499: INFO: Created: latency-svc-lbjvh Jan 4 12:49:31.513: INFO: Got endpoints: latency-svc-lbjvh [1.665517249s] Jan 4 12:49:31.712: INFO: Created: latency-svc-r54q5 Jan 4 12:49:31.721: INFO: Got endpoints: latency-svc-r54q5 [1.71539025s] Jan 4 12:49:31.804: INFO: Created: latency-svc-4dvmj Jan 4 12:49:31.905: INFO: Got endpoints: latency-svc-4dvmj [1.852968109s] Jan 4 12:49:31.939: INFO: Created: latency-svc-77cms Jan 4 12:49:31.948: INFO: Got endpoints: latency-svc-77cms [1.755860976s] Jan 4 12:49:31.982: INFO: Created: latency-svc-l6x75 Jan 4 12:49:31.991: INFO: Got endpoints: latency-svc-l6x75 [1.760364755s] Jan 4 12:49:32.089: INFO: Created: latency-svc-6lkhl Jan 4 12:49:32.098: INFO: Got endpoints: latency-svc-6lkhl [1.726267592s] Jan 4 12:49:32.141: INFO: Created: latency-svc-5d9q5 Jan 4 12:49:32.149: INFO: Got endpoints: latency-svc-5d9q5 [1.775473728s] Jan 4 12:49:32.236: INFO: Created: latency-svc-snz76 Jan 4 12:49:32.243: INFO: Got endpoints: latency-svc-snz76 [1.746540199s] Jan 4 12:49:32.308: INFO: Created: latency-svc-b2ccz Jan 4 12:49:32.329: INFO: Got endpoints: latency-svc-b2ccz [1.771372876s] Jan 4 12:49:32.430: INFO: Created: latency-svc-4r5pw 
Jan 4 12:49:32.453: INFO: Got endpoints: latency-svc-4r5pw [1.74591014s] Jan 4 12:49:32.510: INFO: Created: latency-svc-76bzp Jan 4 12:49:32.519: INFO: Got endpoints: latency-svc-76bzp [1.684053809s] Jan 4 12:49:32.631: INFO: Created: latency-svc-trtps Jan 4 12:49:32.660: INFO: Got endpoints: latency-svc-trtps [1.810489377s] Jan 4 12:49:32.699: INFO: Created: latency-svc-lnwq6 Jan 4 12:49:32.715: INFO: Got endpoints: latency-svc-lnwq6 [1.646251205s] Jan 4 12:49:32.830: INFO: Created: latency-svc-g5m6b Jan 4 12:49:32.839: INFO: Got endpoints: latency-svc-g5m6b [1.599571522s] Jan 4 12:49:32.881: INFO: Created: latency-svc-5djhm Jan 4 12:49:32.894: INFO: Got endpoints: latency-svc-5djhm [1.576724709s] Jan 4 12:49:32.980: INFO: Created: latency-svc-2tf45 Jan 4 12:49:32.983: INFO: Got endpoints: latency-svc-2tf45 [1.469180267s] Jan 4 12:49:33.025: INFO: Created: latency-svc-tphhl Jan 4 12:49:33.028: INFO: Got endpoints: latency-svc-tphhl [1.306663326s] Jan 4 12:49:33.126: INFO: Created: latency-svc-xx8q9 Jan 4 12:49:33.135: INFO: Got endpoints: latency-svc-xx8q9 [1.22960742s] Jan 4 12:49:33.387: INFO: Created: latency-svc-7mvfv Jan 4 12:49:33.395: INFO: Got endpoints: latency-svc-7mvfv [1.447420291s] Jan 4 12:49:33.447: INFO: Created: latency-svc-lfh67 Jan 4 12:49:33.556: INFO: Got endpoints: latency-svc-lfh67 [1.564568484s] Jan 4 12:49:33.556: INFO: Created: latency-svc-wxm5t Jan 4 12:49:33.563: INFO: Got endpoints: latency-svc-wxm5t [1.464544773s] Jan 4 12:49:33.609: INFO: Created: latency-svc-sjk9r Jan 4 12:49:33.623: INFO: Got endpoints: latency-svc-sjk9r [1.474012786s] Jan 4 12:49:33.738: INFO: Created: latency-svc-jlzsg Jan 4 12:49:33.758: INFO: Got endpoints: latency-svc-jlzsg [1.513966615s] Jan 4 12:49:33.898: INFO: Created: latency-svc-fb2p9 Jan 4 12:49:33.911: INFO: Got endpoints: latency-svc-fb2p9 [1.580962013s] Jan 4 12:49:33.957: INFO: Created: latency-svc-vmkwn Jan 4 12:49:33.980: INFO: Got endpoints: latency-svc-vmkwn [1.526952776s] Jan 4 12:49:34.056: 
INFO: Created: latency-svc-ldzvs Jan 4 12:49:34.070: INFO: Got endpoints: latency-svc-ldzvs [1.551340982s] Jan 4 12:49:34.136: INFO: Created: latency-svc-56m5v Jan 4 12:49:34.141: INFO: Got endpoints: latency-svc-56m5v [1.481268499s] Jan 4 12:49:34.264: INFO: Created: latency-svc-djrj7 Jan 4 12:49:34.279: INFO: Got endpoints: latency-svc-djrj7 [1.563741977s] Jan 4 12:49:34.327: INFO: Created: latency-svc-n4l6l Jan 4 12:49:34.338: INFO: Got endpoints: latency-svc-n4l6l [1.498752661s] Jan 4 12:49:34.451: INFO: Created: latency-svc-j5z8w Jan 4 12:49:34.465: INFO: Got endpoints: latency-svc-j5z8w [1.570978283s] Jan 4 12:49:34.508: INFO: Created: latency-svc-tgljw Jan 4 12:49:34.514: INFO: Got endpoints: latency-svc-tgljw [1.531668069s] Jan 4 12:49:34.624: INFO: Created: latency-svc-2ntqm Jan 4 12:49:34.643: INFO: Got endpoints: latency-svc-2ntqm [1.614971282s] Jan 4 12:49:34.682: INFO: Created: latency-svc-29h2g Jan 4 12:49:34.741: INFO: Got endpoints: latency-svc-29h2g [1.606351701s] Jan 4 12:49:34.804: INFO: Created: latency-svc-xf9kj Jan 4 12:49:34.807: INFO: Got endpoints: latency-svc-xf9kj [1.411721844s] Jan 4 12:49:34.908: INFO: Created: latency-svc-6xfnn Jan 4 12:49:34.920: INFO: Got endpoints: latency-svc-6xfnn [1.363517813s] Jan 4 12:49:34.968: INFO: Created: latency-svc-hzqld Jan 4 12:49:34.972: INFO: Got endpoints: latency-svc-hzqld [1.409409852s] Jan 4 12:49:35.073: INFO: Created: latency-svc-gcnnj Jan 4 12:49:35.086: INFO: Got endpoints: latency-svc-gcnnj [1.462652223s] Jan 4 12:49:35.143: INFO: Created: latency-svc-c7n28 Jan 4 12:49:35.161: INFO: Got endpoints: latency-svc-c7n28 [1.403132466s] Jan 4 12:49:35.313: INFO: Created: latency-svc-tbdvl Jan 4 12:49:35.319: INFO: Got endpoints: latency-svc-tbdvl [1.408573873s] Jan 4 12:49:35.382: INFO: Created: latency-svc-b6hdd Jan 4 12:49:35.495: INFO: Got endpoints: latency-svc-b6hdd [1.51411207s] Jan 4 12:49:35.562: INFO: Created: latency-svc-2688w Jan 4 12:49:35.587: INFO: Got endpoints: latency-svc-2688w 
[1.515973895s] Jan 4 12:49:35.707: INFO: Created: latency-svc-shjzg Jan 4 12:49:35.715: INFO: Got endpoints: latency-svc-shjzg [1.573667964s] Jan 4 12:49:35.883: INFO: Created: latency-svc-t8dmv Jan 4 12:49:35.891: INFO: Got endpoints: latency-svc-t8dmv [1.611879397s] Jan 4 12:49:35.948: INFO: Created: latency-svc-dh87n Jan 4 12:49:35.949: INFO: Got endpoints: latency-svc-dh87n [1.61109186s] Jan 4 12:49:36.041: INFO: Created: latency-svc-hw4kj Jan 4 12:49:36.054: INFO: Got endpoints: latency-svc-hw4kj [1.589085729s] Jan 4 12:49:36.098: INFO: Created: latency-svc-9tdnv Jan 4 12:49:36.136: INFO: Got endpoints: latency-svc-9tdnv [1.621898961s] Jan 4 12:49:36.230: INFO: Created: latency-svc-84hgz Jan 4 12:49:36.279: INFO: Got endpoints: latency-svc-84hgz [1.636257806s] Jan 4 12:49:36.298: INFO: Created: latency-svc-lkbfc Jan 4 12:49:36.449: INFO: Got endpoints: latency-svc-lkbfc [1.708140797s] Jan 4 12:49:36.495: INFO: Created: latency-svc-dd2mh Jan 4 12:49:36.503: INFO: Got endpoints: latency-svc-dd2mh [1.695608256s] Jan 4 12:49:36.681: INFO: Created: latency-svc-hzljr Jan 4 12:49:36.684: INFO: Got endpoints: latency-svc-hzljr [1.764495038s] Jan 4 12:49:36.745: INFO: Created: latency-svc-mvwhl Jan 4 12:49:36.759: INFO: Got endpoints: latency-svc-mvwhl [1.786053704s] Jan 4 12:49:36.906: INFO: Created: latency-svc-jxg5h Jan 4 12:49:36.912: INFO: Got endpoints: latency-svc-jxg5h [1.825476661s] Jan 4 12:49:36.950: INFO: Created: latency-svc-ssgj7 Jan 4 12:49:37.057: INFO: Got endpoints: latency-svc-ssgj7 [1.896462837s] Jan 4 12:49:37.062: INFO: Created: latency-svc-tc69r Jan 4 12:49:37.074: INFO: Got endpoints: latency-svc-tc69r [1.754469865s] Jan 4 12:49:37.126: INFO: Created: latency-svc-lvmbr Jan 4 12:49:37.134: INFO: Got endpoints: latency-svc-lvmbr [1.63950365s] Jan 4 12:49:37.313: INFO: Created: latency-svc-tjl6g Jan 4 12:49:37.323: INFO: Got endpoints: latency-svc-tjl6g [1.736319134s] Jan 4 12:49:37.379: INFO: Created: latency-svc-mb9tf Jan 4 12:49:37.385: INFO: 
Got endpoints: latency-svc-mb9tf [1.669669994s] Jan 4 12:49:37.505: INFO: Created: latency-svc-k5d5j Jan 4 12:49:37.514: INFO: Got endpoints: latency-svc-k5d5j [1.622460961s] Jan 4 12:49:37.564: INFO: Created: latency-svc-xdlgx Jan 4 12:49:37.564: INFO: Got endpoints: latency-svc-xdlgx [1.615462188s] Jan 4 12:49:37.681: INFO: Created: latency-svc-2rj5x Jan 4 12:49:37.765: INFO: Got endpoints: latency-svc-2rj5x [1.710748638s] Jan 4 12:49:37.778: INFO: Created: latency-svc-4gkd8 Jan 4 12:49:37.978: INFO: Got endpoints: latency-svc-4gkd8 [1.841908206s] Jan 4 12:49:37.990: INFO: Created: latency-svc-c4mvz Jan 4 12:49:38.003: INFO: Got endpoints: latency-svc-c4mvz [1.723985119s] Jan 4 12:49:38.051: INFO: Created: latency-svc-zfvjn Jan 4 12:49:38.055: INFO: Got endpoints: latency-svc-zfvjn [1.605238817s] Jan 4 12:49:38.174: INFO: Created: latency-svc-5qgnp Jan 4 12:49:38.194: INFO: Got endpoints: latency-svc-5qgnp [1.691156969s] Jan 4 12:49:38.269: INFO: Created: latency-svc-98s5p Jan 4 12:49:38.363: INFO: Got endpoints: latency-svc-98s5p [1.679034689s] Jan 4 12:49:38.370: INFO: Created: latency-svc-b6sw7 Jan 4 12:49:38.392: INFO: Got endpoints: latency-svc-b6sw7 [1.633398265s] Jan 4 12:49:38.435: INFO: Created: latency-svc-8hhl5 Jan 4 12:49:38.450: INFO: Got endpoints: latency-svc-8hhl5 [1.537712089s] Jan 4 12:49:38.552: INFO: Created: latency-svc-wxwtf Jan 4 12:49:38.594: INFO: Got endpoints: latency-svc-wxwtf [1.536623629s] Jan 4 12:49:38.606: INFO: Created: latency-svc-c6ff6 Jan 4 12:49:38.616: INFO: Got endpoints: latency-svc-c6ff6 [1.542307923s] Jan 4 12:49:38.745: INFO: Created: latency-svc-9qbbf Jan 4 12:49:38.755: INFO: Got endpoints: latency-svc-9qbbf [1.620475626s] Jan 4 12:49:38.813: INFO: Created: latency-svc-pxk2g Jan 4 12:49:38.827: INFO: Got endpoints: latency-svc-pxk2g [1.503720222s] Jan 4 12:49:38.909: INFO: Created: latency-svc-mmmrp Jan 4 12:49:38.914: INFO: Got endpoints: latency-svc-mmmrp [1.529331212s] Jan 4 12:49:38.959: INFO: Created: 
latency-svc-k6wxs Jan 4 12:49:38.971: INFO: Got endpoints: latency-svc-k6wxs [1.456021037s] Jan 4 12:49:39.080: INFO: Created: latency-svc-c9hjr Jan 4 12:49:39.092: INFO: Got endpoints: latency-svc-c9hjr [1.527553865s] Jan 4 12:49:39.136: INFO: Created: latency-svc-hkcpm Jan 4 12:49:39.147: INFO: Got endpoints: latency-svc-hkcpm [1.381284822s] Jan 4 12:49:39.255: INFO: Created: latency-svc-m98mr Jan 4 12:49:39.262: INFO: Got endpoints: latency-svc-m98mr [1.283831565s] Jan 4 12:49:39.305: INFO: Created: latency-svc-jr87v Jan 4 12:49:39.316: INFO: Got endpoints: latency-svc-jr87v [1.31212279s] Jan 4 12:49:39.464: INFO: Created: latency-svc-89m7v Jan 4 12:49:39.511: INFO: Got endpoints: latency-svc-89m7v [1.456350403s] Jan 4 12:49:39.519: INFO: Created: latency-svc-kczvd Jan 4 12:49:39.529: INFO: Got endpoints: latency-svc-kczvd [1.334107303s] Jan 4 12:49:39.683: INFO: Created: latency-svc-bsdf2 Jan 4 12:49:39.705: INFO: Got endpoints: latency-svc-bsdf2 [1.341375386s] Jan 4 12:49:39.769: INFO: Created: latency-svc-xsf9t Jan 4 12:49:39.876: INFO: Got endpoints: latency-svc-xsf9t [1.484055257s] Jan 4 12:49:39.894: INFO: Created: latency-svc-qqqmn Jan 4 12:49:39.905: INFO: Got endpoints: latency-svc-qqqmn [1.455678259s] Jan 4 12:49:39.962: INFO: Created: latency-svc-qlzm5 Jan 4 12:49:39.972: INFO: Got endpoints: latency-svc-qlzm5 [1.377534444s] Jan 4 12:49:40.074: INFO: Created: latency-svc-zwwdv Jan 4 12:49:40.109: INFO: Got endpoints: latency-svc-zwwdv [1.492950507s] Jan 4 12:49:40.174: INFO: Created: latency-svc-wxzrn Jan 4 12:49:40.263: INFO: Got endpoints: latency-svc-wxzrn [1.507644832s] Jan 4 12:49:40.273: INFO: Created: latency-svc-llmb2 Jan 4 12:49:40.285: INFO: Got endpoints: latency-svc-llmb2 [1.457868202s] Jan 4 12:49:40.331: INFO: Created: latency-svc-lkrf8 Jan 4 12:49:40.347: INFO: Got endpoints: latency-svc-lkrf8 [1.432792258s] Jan 4 12:49:40.451: INFO: Created: latency-svc-n7khr Jan 4 12:49:40.453: INFO: Got endpoints: latency-svc-n7khr [1.482359366s] Jan 
4 12:49:40.709: INFO: Created: latency-svc-smzf5 Jan 4 12:49:40.732: INFO: Got endpoints: latency-svc-smzf5 [1.639815003s] Jan 4 12:49:40.904: INFO: Created: latency-svc-722nd Jan 4 12:49:40.914: INFO: Got endpoints: latency-svc-722nd [1.766549274s] Jan 4 12:49:40.963: INFO: Created: latency-svc-zvgwl Jan 4 12:49:40.980: INFO: Got endpoints: latency-svc-zvgwl [1.717161416s] Jan 4 12:49:41.040: INFO: Created: latency-svc-svpp7 Jan 4 12:49:41.054: INFO: Got endpoints: latency-svc-svpp7 [1.738040778s] Jan 4 12:49:41.091: INFO: Created: latency-svc-x4585 Jan 4 12:49:41.098: INFO: Got endpoints: latency-svc-x4585 [1.586693323s] Jan 4 12:49:41.206: INFO: Created: latency-svc-zsbc4 Jan 4 12:49:41.215: INFO: Got endpoints: latency-svc-zsbc4 [1.686522888s] Jan 4 12:49:41.260: INFO: Created: latency-svc-k9dlx Jan 4 12:49:41.271: INFO: Got endpoints: latency-svc-k9dlx [1.566058208s] Jan 4 12:49:41.309: INFO: Created: latency-svc-bv4vr Jan 4 12:49:41.427: INFO: Got endpoints: latency-svc-bv4vr [1.550452846s] Jan 4 12:49:41.457: INFO: Created: latency-svc-c7shp Jan 4 12:49:41.490: INFO: Got endpoints: latency-svc-c7shp [1.583964575s] Jan 4 12:49:41.493: INFO: Created: latency-svc-hv965 Jan 4 12:49:41.499: INFO: Got endpoints: latency-svc-hv965 [1.526720438s] Jan 4 12:49:41.578: INFO: Created: latency-svc-mck6j Jan 4 12:49:41.600: INFO: Got endpoints: latency-svc-mck6j [1.490651439s] Jan 4 12:49:41.680: INFO: Created: latency-svc-d5kls Jan 4 12:49:41.687: INFO: Got endpoints: latency-svc-d5kls [1.424016078s] Jan 4 12:49:41.832: INFO: Created: latency-svc-5j2wl Jan 4 12:49:41.844: INFO: Got endpoints: latency-svc-5j2wl [1.55858003s] Jan 4 12:49:41.889: INFO: Created: latency-svc-cpwnq Jan 4 12:49:41.900: INFO: Got endpoints: latency-svc-cpwnq [1.552485194s] Jan 4 12:49:42.001: INFO: Created: latency-svc-gwwwn Jan 4 12:49:42.007: INFO: Got endpoints: latency-svc-gwwwn [1.553681765s] Jan 4 12:49:42.037: INFO: Created: latency-svc-bsmzw Jan 4 12:49:42.042: INFO: Got endpoints: 
latency-svc-bsmzw [1.31014887s] Jan 4 12:49:42.083: INFO: Created: latency-svc-clnk5 Jan 4 12:49:42.182: INFO: Got endpoints: latency-svc-clnk5 [1.268483551s] Jan 4 12:49:42.202: INFO: Created: latency-svc-jrbtx Jan 4 12:49:42.209: INFO: Got endpoints: latency-svc-jrbtx [1.22881804s] Jan 4 12:49:42.273: INFO: Created: latency-svc-h7vbz Jan 4 12:49:42.273: INFO: Got endpoints: latency-svc-h7vbz [1.219175558s] Jan 4 12:49:42.377: INFO: Created: latency-svc-2khtj Jan 4 12:49:42.378: INFO: Got endpoints: latency-svc-2khtj [1.279682949s] Jan 4 12:49:42.430: INFO: Created: latency-svc-smv8z Jan 4 12:49:42.486: INFO: Got endpoints: latency-svc-smv8z [1.270194522s] Jan 4 12:49:42.512: INFO: Created: latency-svc-xjhw7 Jan 4 12:49:42.560: INFO: Created: latency-svc-n578z Jan 4 12:49:42.578: INFO: Got endpoints: latency-svc-xjhw7 [1.306257729s] Jan 4 12:49:42.627: INFO: Got endpoints: latency-svc-n578z [1.199564367s] Jan 4 12:49:42.637: INFO: Created: latency-svc-wlrzv Jan 4 12:49:42.655: INFO: Got endpoints: latency-svc-wlrzv [1.164812897s] Jan 4 12:49:42.700: INFO: Created: latency-svc-zkkkp Jan 4 12:49:42.708: INFO: Got endpoints: latency-svc-zkkkp [1.209168616s] Jan 4 12:49:42.831: INFO: Created: latency-svc-wxj72 Jan 4 12:49:42.866: INFO: Got endpoints: latency-svc-wxj72 [1.266006191s] Jan 4 12:49:42.872: INFO: Created: latency-svc-9hrst Jan 4 12:49:42.979: INFO: Got endpoints: latency-svc-9hrst [1.291512522s] Jan 4 12:49:42.993: INFO: Created: latency-svc-wm8g4 Jan 4 12:49:43.002: INFO: Got endpoints: latency-svc-wm8g4 [1.157542781s] Jan 4 12:49:43.042: INFO: Created: latency-svc-mh6xb Jan 4 12:49:43.048: INFO: Got endpoints: latency-svc-mh6xb [1.147998111s] Jan 4 12:49:43.138: INFO: Created: latency-svc-67nqn Jan 4 12:49:43.152: INFO: Got endpoints: latency-svc-67nqn [1.144845967s] Jan 4 12:49:43.203: INFO: Created: latency-svc-6hj8l Jan 4 12:49:43.302: INFO: Got endpoints: latency-svc-6hj8l [1.260182676s] Jan 4 12:49:43.326: INFO: Created: latency-svc-47gtg Jan 4 
12:49:43.329: INFO: Got endpoints: latency-svc-47gtg [1.146790427s] Jan 4 12:49:43.376: INFO: Created: latency-svc-pm4jw Jan 4 12:49:43.383: INFO: Got endpoints: latency-svc-pm4jw [1.174342104s] Jan 4 12:49:43.493: INFO: Created: latency-svc-b8rmk Jan 4 12:49:43.501: INFO: Got endpoints: latency-svc-b8rmk [1.227455573s] Jan 4 12:49:43.564: INFO: Created: latency-svc-pq8rp Jan 4 12:49:43.575: INFO: Got endpoints: latency-svc-pq8rp [1.196557396s] Jan 4 12:49:43.678: INFO: Created: latency-svc-4z9wk Jan 4 12:49:43.693: INFO: Got endpoints: latency-svc-4z9wk [1.206837685s] Jan 4 12:49:43.754: INFO: Created: latency-svc-jl2d8 Jan 4 12:49:43.769: INFO: Got endpoints: latency-svc-jl2d8 [1.19074401s] Jan 4 12:49:43.954: INFO: Created: latency-svc-4zb9w Jan 4 12:49:43.965: INFO: Got endpoints: latency-svc-4zb9w [1.337640104s] Jan 4 12:49:44.177: INFO: Created: latency-svc-xhqc5 Jan 4 12:49:44.211: INFO: Got endpoints: latency-svc-xhqc5 [1.555917322s] Jan 4 12:49:44.212: INFO: Created: latency-svc-4k9l7 Jan 4 12:49:44.227: INFO: Got endpoints: latency-svc-4k9l7 [1.518837169s] Jan 4 12:49:44.330: INFO: Created: latency-svc-n4s45 Jan 4 12:49:44.365: INFO: Got endpoints: latency-svc-n4s45 [1.49806841s] Jan 4 12:49:44.394: INFO: Created: latency-svc-vpfz7 Jan 4 12:49:44.413: INFO: Got endpoints: latency-svc-vpfz7 [1.434098687s] Jan 4 12:49:44.477: INFO: Created: latency-svc-nhdgs Jan 4 12:49:44.483: INFO: Got endpoints: latency-svc-nhdgs [1.481198901s] Jan 4 12:49:44.539: INFO: Created: latency-svc-cpqmb Jan 4 12:49:44.550: INFO: Got endpoints: latency-svc-cpqmb [1.501211401s] Jan 4 12:49:44.612: INFO: Created: latency-svc-5nbx5 Jan 4 12:49:44.615: INFO: Got endpoints: latency-svc-5nbx5 [1.462605622s] Jan 4 12:49:44.658: INFO: Created: latency-svc-r5vf4 Jan 4 12:49:44.680: INFO: Got endpoints: latency-svc-r5vf4 [1.377091686s] Jan 4 12:49:44.822: INFO: Created: latency-svc-jgzvc Jan 4 12:49:44.831: INFO: Got endpoints: latency-svc-jgzvc [1.501332318s] Jan 4 12:49:44.894: INFO: 
Created: latency-svc-25nnc Jan 4 12:49:44.904: INFO: Got endpoints: latency-svc-25nnc [1.520192876s] Jan 4 12:49:45.012: INFO: Created: latency-svc-d9qm5 Jan 4 12:49:45.019: INFO: Got endpoints: latency-svc-d9qm5 [1.518245191s] Jan 4 12:49:45.148: INFO: Created: latency-svc-fxnf2 Jan 4 12:49:45.158: INFO: Got endpoints: latency-svc-fxnf2 [1.583113152s] Jan 4 12:49:45.226: INFO: Created: latency-svc-mzv9s Jan 4 12:49:45.301: INFO: Got endpoints: latency-svc-mzv9s [1.608702589s] Jan 4 12:49:45.329: INFO: Created: latency-svc-tpc2g Jan 4 12:49:45.378: INFO: Got endpoints: latency-svc-tpc2g [1.609221715s] Jan 4 12:49:45.381: INFO: Created: latency-svc-tlrrw Jan 4 12:49:45.386: INFO: Got endpoints: latency-svc-tlrrw [1.421063264s] Jan 4 12:49:45.545: INFO: Created: latency-svc-xq5rn Jan 4 12:49:45.553: INFO: Got endpoints: latency-svc-xq5rn [1.342099734s] Jan 4 12:49:45.632: INFO: Created: latency-svc-49hxd Jan 4 12:49:45.715: INFO: Got endpoints: latency-svc-49hxd [1.488374764s] Jan 4 12:49:45.745: INFO: Created: latency-svc-hv8w5 Jan 4 12:49:45.757: INFO: Got endpoints: latency-svc-hv8w5 [1.392069328s] Jan 4 12:49:45.903: INFO: Created: latency-svc-rs8jm Jan 4 12:49:45.917: INFO: Got endpoints: latency-svc-rs8jm [1.503731828s] Jan 4 12:49:45.959: INFO: Created: latency-svc-ngdzc Jan 4 12:49:45.987: INFO: Got endpoints: latency-svc-ngdzc [1.504193721s] Jan 4 12:49:46.054: INFO: Created: latency-svc-rg6q6 Jan 4 12:49:46.060: INFO: Got endpoints: latency-svc-rg6q6 [1.509784676s] Jan 4 12:49:46.108: INFO: Created: latency-svc-tpqjp Jan 4 12:49:46.108: INFO: Got endpoints: latency-svc-tpqjp [1.49351958s] Jan 4 12:49:46.144: INFO: Created: latency-svc-9ltd8 Jan 4 12:49:46.271: INFO: Got endpoints: latency-svc-9ltd8 [1.591023709s] Jan 4 12:49:46.287: INFO: Created: latency-svc-xprxl Jan 4 12:49:46.304: INFO: Got endpoints: latency-svc-xprxl [1.473664734s] Jan 4 12:49:46.375: INFO: Created: latency-svc-jb94v Jan 4 12:49:46.443: INFO: Got endpoints: latency-svc-jb94v 
[1.539173764s] Jan 4 12:49:46.480: INFO: Created: latency-svc-d995h Jan 4 12:49:46.491: INFO: Got endpoints: latency-svc-d995h [1.471496656s] Jan 4 12:49:46.530: INFO: Created: latency-svc-sqqpk Jan 4 12:49:46.608: INFO: Got endpoints: latency-svc-sqqpk [1.450459367s] Jan 4 12:49:46.622: INFO: Created: latency-svc-vg8sv Jan 4 12:49:46.668: INFO: Created: latency-svc-86jjh Jan 4 12:49:46.669: INFO: Got endpoints: latency-svc-vg8sv [1.366668706s] Jan 4 12:49:46.680: INFO: Got endpoints: latency-svc-86jjh [1.301412192s] Jan 4 12:49:46.848: INFO: Created: latency-svc-9rgwc Jan 4 12:49:46.886: INFO: Got endpoints: latency-svc-9rgwc [1.499361451s] Jan 4 12:49:46.915: INFO: Created: latency-svc-4hgk6 Jan 4 12:49:46.921: INFO: Got endpoints: latency-svc-4hgk6 [1.367327619s] Jan 4 12:49:47.001: INFO: Created: latency-svc-vzwnx Jan 4 12:49:47.006: INFO: Got endpoints: latency-svc-vzwnx [1.29093158s] Jan 4 12:49:47.006: INFO: Latencies: [131.942811ms 161.537984ms 287.96078ms 312.52771ms 385.839151ms 489.115827ms 666.798599ms 675.205202ms 720.274788ms 755.521458ms 922.219829ms 938.847067ms 987.06901ms 1.051038944s 1.083933962s 1.102760094s 1.126120522s 1.144845967s 1.146790427s 1.147998111s 1.157542781s 1.161498665s 1.164812897s 1.166819075s 1.169328656s 1.174342104s 1.19074401s 1.196557396s 1.197145478s 1.199564367s 1.206837685s 1.209168616s 1.219175558s 1.227455573s 1.22881804s 1.22960742s 1.260182676s 1.266006191s 1.268483551s 1.270194522s 1.279682949s 1.28080801s 1.283831565s 1.29093158s 1.291512522s 1.301412192s 1.306257729s 1.306663326s 1.31014887s 1.31212279s 1.334107303s 1.337640104s 1.339157735s 1.341375386s 1.342099734s 1.348984142s 1.35301809s 1.363517813s 1.366668706s 1.367327619s 1.377091686s 1.377534444s 1.381284822s 1.392069328s 1.403132466s 1.404120312s 1.408573873s 1.409409852s 1.411721844s 1.421063264s 1.424016078s 1.431933389s 1.432792258s 1.434098687s 1.447420291s 1.448069548s 1.450459367s 1.452647651s 1.455678259s 1.456021037s 1.456350403s 1.457868202s 
1.462605622s 1.462652223s 1.464544773s 1.465531516s 1.469180267s 1.471496656s 1.473664734s 1.474012786s 1.481198901s 1.481268499s 1.482359366s 1.484055257s 1.488374764s 1.490651439s 1.492950507s 1.49351958s 1.49806841s 1.498752661s 1.499361451s 1.501211401s 1.501332318s 1.503070535s 1.503720222s 1.503731828s 1.504193721s 1.507644832s 1.509784676s 1.513966615s 1.51411207s 1.515973895s 1.518245191s 1.518837169s 1.520192876s 1.526720438s 1.526952776s 1.527553865s 1.529331212s 1.531668069s 1.536623629s 1.537712089s 1.539173764s 1.542307923s 1.550452846s 1.551340982s 1.552485194s 1.553681765s 1.5557689s 1.555917322s 1.55858003s 1.563741977s 1.564568484s 1.566058208s 1.566261939s 1.568213744s 1.569237811s 1.570978283s 1.572897307s 1.573667964s 1.576724709s 1.57748265s 1.580962013s 1.581284448s 1.583113152s 1.583964575s 1.585611728s 1.585717698s 1.586693323s 1.589085729s 1.591023709s 1.599571522s 1.605238817s 1.606351701s 1.608702589s 1.609221715s 1.61109186s 1.611879397s 1.614971282s 1.615462188s 1.620475626s 1.621898961s 1.622127112s 1.622460961s 1.633398265s 1.636257806s 1.63950365s 1.639815003s 1.646251205s 1.665517249s 1.669669994s 1.674836075s 1.679034689s 1.684053809s 1.686522888s 1.691156969s 1.695608256s 1.708140797s 1.710748638s 1.71539025s 1.717161416s 1.723985119s 1.726267592s 1.736319134s 1.738040778s 1.74591014s 1.746540199s 1.754469865s 1.755860976s 1.760364755s 1.764495038s 1.766549274s 1.771372876s 1.775473728s 1.786053704s 1.810489377s 1.825476661s 1.841908206s 1.852968109s 1.896462837s] Jan 4 12:49:47.007: INFO: 50 %ile: 1.499361451s Jan 4 12:49:47.007: INFO: 90 %ile: 1.717161416s Jan 4 12:49:47.007: INFO: 99 %ile: 1.852968109s Jan 4 12:49:47.007: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:49:47.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4227" for 
this suite. Jan 4 12:50:29.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:50:29.171: INFO: namespace svc-latency-4227 deletion completed in 42.154520055s • [SLOW TEST:72.285 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:50:29.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 
'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:51:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4727" for this suite. Jan 4 12:51:27.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:51:27.683: INFO: namespace container-runtime-4727 deletion completed in 6.155067556s • [SLOW TEST:58.512 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jan 4 12:51:27.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2220.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2220.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 12:51:41.929: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.943: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.954: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.971: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.978: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.985: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 
4 12:51:41.989: INFO: Unable to read jessie_udp@PodARecord from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.994: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91: the server could not find the requested resource (get pods dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91) Jan 4 12:51:41.994: INFO: Lookups using dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 4 12:51:47.064: INFO: DNS probes using dns-2220/dns-test-76c9a424-d07c-4db1-9fb6-d712cadcaa91 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:51:47.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2220" for this suite. 
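The probe script above derives each pod's DNS A-record name with an awk one-liner that turns the dots in the pod IP into dashes and appends the namespace's pod subdomain. A minimal Python sketch of that mapping (`pod_a_record` is an illustrative name, not part of the test framework):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the pod A-record name the way the probe script's awk
    one-liner does: dots in the pod IP become dashes, then the
    namespace's pod subdomain is appended."""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"
```

For the dns-2220 namespace in this run, a pod at 10.44.0.1 would probe 10-44-0-1.dns-2220.pod.cluster.local over both UDP and TCP.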
Jan 4 12:51:53.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:51:53.407: INFO: namespace dns-2220 deletion completed in 6.209447858s • [SLOW TEST:25.722 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:51:53.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 4 12:51:53.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-922' Jan 4 12:51:56.048: INFO: stderr: "" Jan 4 12:51:56.048: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
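The readiness checks that follow re-run a kubectl go-template that prints "true" only when a container with the expected name reports a state.running entry, and an empty string otherwise (which the test logs as "created but not running"). A rough Python equivalent of what that template evaluates (`container_running` is an illustrative name):

```python
def container_running(pod: dict, name: str = "update-demo") -> str:
    # Walk status.containerStatuses like the go-template does and emit
    # "true" only for a matching container that has a state.running
    # entry; otherwise emit "", which the test treats as "not running".
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return "true"
    return ""
```

This also explains the empty stdout lines in the log: a pod whose container is still in a waiting state simply fails the `exists . "state" "running"` guard, so nothing is printed.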
Jan 4 12:51:56.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:51:56.189: INFO: stderr: "" Jan 4 12:51:56.189: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " Jan 4 12:51:56.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:51:56.363: INFO: stderr: "" Jan 4 12:51:56.363: INFO: stdout: "" Jan 4 12:51:56.363: INFO: update-demo-nautilus-8nxd7 is created but not running Jan 4 12:52:01.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:01.508: INFO: stderr: "" Jan 4 12:52:01.508: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " Jan 4 12:52:01.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:01.585: INFO: stderr: "" Jan 4 12:52:01.585: INFO: stdout: "" Jan 4 12:52:01.585: INFO: update-demo-nautilus-8nxd7 is created but not running Jan 4 12:52:06.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:06.723: INFO: stderr: "" Jan 4 12:52:06.723: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " Jan 4 12:52:06.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:06.830: INFO: stderr: "" Jan 4 12:52:06.830: INFO: stdout: "" Jan 4 12:52:06.830: INFO: update-demo-nautilus-8nxd7 is created but not running Jan 4 12:52:11.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:11.995: INFO: stderr: "" Jan 4 12:52:11.995: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " Jan 4 12:52:11.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:12.122: INFO: stderr: "" Jan 4 12:52:12.122: INFO: stdout: "true" Jan 4 12:52:12.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:12.335: INFO: stderr: "" Jan 4 12:52:12.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:12.335: INFO: validating pod update-demo-nautilus-8nxd7 Jan 4 12:52:12.366: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:12.366: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 12:52:12.366: INFO: update-demo-nautilus-8nxd7 is verified up and running Jan 4 12:52:12.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgzdc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:12.448: INFO: stderr: "" Jan 4 12:52:12.448: INFO: stdout: "true" Jan 4 12:52:12.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgzdc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:12.657: INFO: stderr: "" Jan 4 12:52:12.657: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:12.657: INFO: validating pod update-demo-nautilus-pgzdc Jan 4 12:52:12.699: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:12.700: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
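Once a container reports running, the validation step fetches a small JSON payload from the pod and compares its image field against the expected value, which is what the "got data ... expecting nautilus.jpg" lines record. A sketch of that comparison, assuming only the payload shape shown in the log:

```python
import json

def validate_pod_data(raw: str, expected: str = "nautilus.jpg") -> bool:
    # Unmarshal the payload served by the pod and compare its "image"
    # field, mirroring the 'Unmarshalled json ... expecting nautilus.jpg'
    # check in the log.
    return json.loads(raw).get("image") == expected
```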
Jan 4 12:52:12.700: INFO: update-demo-nautilus-pgzdc is verified up and running STEP: scaling down the replication controller Jan 4 12:52:12.713: INFO: scanned /root for discovery docs: Jan 4 12:52:12.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-922' Jan 4 12:52:13.858: INFO: stderr: "" Jan 4 12:52:13.858: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 12:52:13.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:14.013: INFO: stderr: "" Jan 4 12:52:14.013: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 4 12:52:19.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:19.170: INFO: stderr: "" Jan 4 12:52:19.170: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 4 12:52:24.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:24.322: INFO: stderr: "" Jan 4 12:52:24.322: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-pgzdc " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 4 12:52:29.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:29.483: INFO: stderr: "" Jan 4 12:52:29.483: 
INFO: stdout: "update-demo-nautilus-8nxd7 " Jan 4 12:52:29.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:29.636: INFO: stderr: "" Jan 4 12:52:29.637: INFO: stdout: "true" Jan 4 12:52:29.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:29.740: INFO: stderr: "" Jan 4 12:52:29.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:29.740: INFO: validating pod update-demo-nautilus-8nxd7 Jan 4 12:52:29.745: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:29.745: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 12:52:29.745: INFO: update-demo-nautilus-8nxd7 is verified up and running STEP: scaling up the replication controller Jan 4 12:52:29.747: INFO: scanned /root for discovery docs: Jan 4 12:52:29.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-922' Jan 4 12:52:30.892: INFO: stderr: "" Jan 4 12:52:30.892: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 4 12:52:30.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:31.050: INFO: stderr: "" Jan 4 12:52:31.051: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-x9z9b " Jan 4 12:52:31.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:31.128: INFO: stderr: "" Jan 4 12:52:31.128: INFO: stdout: "true" Jan 4 12:52:31.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:31.275: INFO: stderr: "" Jan 4 12:52:31.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:31.275: INFO: validating pod update-demo-nautilus-8nxd7 Jan 4 12:52:31.278: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:31.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 12:52:31.278: INFO: update-demo-nautilus-8nxd7 is verified up and running Jan 4 12:52:31.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9z9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:31.371: INFO: stderr: "" Jan 4 12:52:31.371: INFO: stdout: "" Jan 4 12:52:31.371: INFO: update-demo-nautilus-x9z9b is created but not running Jan 4 12:52:36.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:36.499: INFO: stderr: "" Jan 4 12:52:36.500: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-x9z9b " Jan 4 12:52:36.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:36.582: INFO: stderr: "" Jan 4 12:52:36.582: INFO: stdout: "true" Jan 4 12:52:36.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:36.654: INFO: stderr: "" Jan 4 12:52:36.654: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:36.654: INFO: validating pod update-demo-nautilus-8nxd7 Jan 4 12:52:36.659: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:36.659: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 12:52:36.659: INFO: update-demo-nautilus-8nxd7 is verified up and running Jan 4 12:52:36.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9z9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:36.775: INFO: stderr: "" Jan 4 12:52:36.775: INFO: stdout: "" Jan 4 12:52:36.775: INFO: update-demo-nautilus-x9z9b is created but not running Jan 4 12:52:41.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-922' Jan 4 12:52:41.933: INFO: stderr: "" Jan 4 12:52:41.933: INFO: stdout: "update-demo-nautilus-8nxd7 update-demo-nautilus-x9z9b " Jan 4 12:52:41.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:42.068: INFO: stderr: "" Jan 4 12:52:42.068: INFO: stdout: "true" Jan 4 12:52:42.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nxd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:42.181: INFO: stderr: "" Jan 4 12:52:42.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:42.181: INFO: validating pod update-demo-nautilus-8nxd7 Jan 4 12:52:42.194: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:42.194: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 12:52:42.194: INFO: update-demo-nautilus-8nxd7 is verified up and running Jan 4 12:52:42.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9z9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:42.386: INFO: stderr: "" Jan 4 12:52:42.386: INFO: stdout: "true" Jan 4 12:52:42.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9z9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-922' Jan 4 12:52:42.479: INFO: stderr: "" Jan 4 12:52:42.479: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 12:52:42.479: INFO: validating pod update-demo-nautilus-x9z9b Jan 4 12:52:42.502: INFO: got data: { "image": "nautilus.jpg" } Jan 4 12:52:42.502: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 12:52:42.502: INFO: update-demo-nautilus-x9z9b is verified up and running STEP: using delete to clean up resources Jan 4 12:52:42.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-922' Jan 4 12:52:42.594: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 4 12:52:42.594: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 4 12:52:42.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-922' Jan 4 12:52:42.700: INFO: stderr: "No resources found.\n" Jan 4 12:52:42.700: INFO: stdout: "" Jan 4 12:52:42.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-922 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 12:52:42.786: INFO: stderr: "" Jan 4 12:52:42.786: INFO: stdout: "update-demo-nautilus-8nxd7\nupdate-demo-nautilus-x9z9b\n" Jan 4 12:52:43.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-922' Jan 4 12:52:43.369: INFO: stderr: "No resources found.\n" Jan 4 12:52:43.369: INFO: stdout: "" Jan 4 12:52:43.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-922 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 12:52:43.470: INFO: stderr: "" Jan 4 12:52:43.470: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:52:43.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-922" for this suite. 
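The scale operations above do not complete instantly: after `kubectl scale rc update-demo-nautilus --replicas=1`, the test re-lists the pods roughly every five seconds and logs "Replicas for name=update-demo: expected=1 actual=2" until the counts match. A sketch of that wait loop (`wait_for_replicas` and the `list_pods` callable are hypothetical names, not framework API):

```python
import time

def wait_for_replicas(list_pods, expected: int,
                      interval: float = 5.0, timeout: float = 300.0) -> bool:
    """Poll a pod-listing callable until it reports the expected count,
    mirroring the 'Replicas ... expected=1 actual=2' retry loop in the
    log. list_pods is assumed to return a list of pod names."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if len(list_pods()) == expected:
            return True
        time.sleep(interval)
    return False
```

In the run above the count converged after three retries (12:52:14 through 12:52:29), well inside the 5m timeout passed to `kubectl scale`.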
Jan 4 12:53:06.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:53:06.258: INFO: namespace kubectl-922 deletion completed in 22.443324352s • [SLOW TEST:72.850 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:53:06.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jan 4 12:53:06.400: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix828204121/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:53:06.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubectl-278" for this suite. Jan 4 12:53:12.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:53:12.725: INFO: namespace kubectl-278 deletion completed in 6.156813784s • [SLOW TEST:6.466 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:53:12.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:53:22.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2685" for this suite. 
Jan 4 12:54:08.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:54:09.113: INFO: namespace kubelet-test-2685 deletion completed in 46.151803181s • [SLOW TEST:56.388 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:54:09.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 12:54:09.217: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 4 12:54:15.200: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 4 12:54:19.216: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 4 
12:54:29.296: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-9847,SelfLink:/apis/apps/v1/namespaces/deployment-9847/deployments/test-cleanup-deployment,UID:93d70810-c8d2-4f56-b0d4-7ea94fc8b8c0,ResourceVersion:19264630,Generation:1,CreationTimestamp:2020-01-04 12:54:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 12:54:19 +0000 UTC 2020-01-04 12:54:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 12:54:27 +0000 UTC 2020-01-04 12:54:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 4 12:54:29.301: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-9847,SelfLink:/apis/apps/v1/namespaces/deployment-9847/replicasets/test-cleanup-deployment-55bbcbc84c,UID:6fa65f9d-891e-4a38-b743-5fe0b8cc52f1,ResourceVersion:19264618,Generation:1,CreationTimestamp:2020-01-04 12:54:19 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 93d70810-c8d2-4f56-b0d4-7ea94fc8b8c0 0xc002efa1c7 0xc002efa1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 4 12:54:29.311: INFO: Pod "test-cleanup-deployment-55bbcbc84c-q568n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-q568n,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-9847,SelfLink:/api/v1/namespaces/deployment-9847/pods/test-cleanup-deployment-55bbcbc84c-q568n,UID:6f35671f-7c3c-4f3a-97f0-8f2463bdbc1e,ResourceVersion:19264617,Generation:0,CreationTimestamp:2020-01-04 12:54:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 6fa65f9d-891e-4a38-b743-5fe0b8cc52f1 0xc0018208d7 0xc0018208d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v7g4d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v7g4d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v7g4d true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001820950} {node.kubernetes.io/unreachable Exists NoExecute 0xc001820970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:54:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:54:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:54:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:54:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-04 12:54:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 12:54:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://5e9f56e571cdb6788b54f18d3de16c7fb5a878da77cacc436b30e9692a3643a4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:54:29.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9847" for this suite. Jan 4 12:54:35.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:54:35.471: INFO: namespace deployment-9847 deletion completed in 6.153681751s • [SLOW TEST:26.358 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:54:35.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:54:35.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2621" for this suite. Jan 4 12:54:41.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:54:41.812: INFO: namespace services-2621 deletion completed in 6.169632306s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.341 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:54:41.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-7fsm STEP: Creating a pod to test atomic-volume-subpath 
Jan 4 12:54:41.939: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7fsm" in namespace "subpath-1723" to be "success or failure" Jan 4 12:54:41.943: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.633819ms Jan 4 12:54:43.950: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010551481s Jan 4 12:54:45.955: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015896257s Jan 4 12:54:47.964: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024423267s Jan 4 12:54:49.971: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 8.03142955s Jan 4 12:54:51.981: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 10.041725454s Jan 4 12:54:53.986: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 12.046822959s Jan 4 12:54:55.995: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 14.055877887s Jan 4 12:54:58.003: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 16.063895497s Jan 4 12:55:00.011: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 18.07163316s Jan 4 12:55:02.019: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 20.079677592s Jan 4 12:55:04.026: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 22.087281465s Jan 4 12:55:06.034: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 24.094611134s Jan 4 12:55:08.041: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.102216266s Jan 4 12:55:10.059: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Running", Reason="", readiness=true. Elapsed: 28.119983863s Jan 4 12:55:12.071: INFO: Pod "pod-subpath-test-configmap-7fsm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.132204347s STEP: Saw pod success Jan 4 12:55:12.071: INFO: Pod "pod-subpath-test-configmap-7fsm" satisfied condition "success or failure" Jan 4 12:55:12.078: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-7fsm container test-container-subpath-configmap-7fsm: STEP: delete the pod Jan 4 12:55:12.381: INFO: Waiting for pod pod-subpath-test-configmap-7fsm to disappear Jan 4 12:55:12.410: INFO: Pod pod-subpath-test-configmap-7fsm no longer exists STEP: Deleting pod pod-subpath-test-configmap-7fsm Jan 4 12:55:12.410: INFO: Deleting pod "pod-subpath-test-configmap-7fsm" in namespace "subpath-1723" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:55:12.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1723" for this suite. 
Jan 4 12:55:18.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:55:18.571: INFO: namespace subpath-1723 deletion completed in 6.151535972s • [SLOW TEST:36.758 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:55:18.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 4 12:55:34.766: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:34.796: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:36.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:36.808: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:38.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:38.807: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:40.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:40.804: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:42.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:42.806: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:44.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:44.808: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:46.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:46.812: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:48.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:48.804: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:50.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:50.814: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:52.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:52.805: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:54.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:55:54.801: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:55:56.796: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear Jan 4 12:55:56.801: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:55:56.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1076" for this suite. Jan 4 12:56:19.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:56:19.162: INFO: namespace container-lifecycle-hook-1076 deletion completed in 22.20874189s • [SLOW TEST:60.591 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:56:19.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9606 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 4 12:56:19.212: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 4 12:57:03.898: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9606 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:57:03.898: INFO: >>> kubeConfig: /root/.kube/config I0104 12:57:03.960997 8 log.go:172] (0xc000d626e0) (0xc001931c20) Create stream I0104 12:57:03.961032 8 log.go:172] (0xc000d626e0) (0xc001931c20) Stream added, broadcasting: 1 I0104 12:57:03.965551 8 log.go:172] (0xc000d626e0) Reply frame received for 1 I0104 12:57:03.965589 8 log.go:172] (0xc000d626e0) (0xc000112820) Create stream I0104 12:57:03.965608 8 log.go:172] (0xc000d626e0) (0xc000112820) Stream added, broadcasting: 3 I0104 12:57:03.966538 8 log.go:172] (0xc000d626e0) Reply frame received for 3 I0104 12:57:03.966610 8 log.go:172] (0xc000d626e0) (0xc001931d60) Create stream I0104 12:57:03.966623 8 log.go:172] (0xc000d626e0) (0xc001931d60) Stream added, broadcasting: 5 I0104 12:57:03.968106 8 log.go:172] (0xc000d626e0) Reply frame received for 5 I0104 12:57:04.174453 8 log.go:172] (0xc000d626e0) Data frame received for 3 I0104 12:57:04.174484 8 log.go:172] (0xc000112820) (3) Data frame handling I0104 12:57:04.174499 8 log.go:172] (0xc000112820) (3) Data frame sent I0104 12:57:04.416394 8 log.go:172] (0xc000d626e0) (0xc000112820) Stream removed, broadcasting: 3 I0104 12:57:04.416636 8 log.go:172] (0xc000d626e0) (0xc001931d60) Stream removed, broadcasting: 5 I0104 12:57:04.416676 8 log.go:172] (0xc000d626e0) Data frame 
received for 1 I0104 12:57:04.416694 8 log.go:172] (0xc001931c20) (1) Data frame handling I0104 12:57:04.416723 8 log.go:172] (0xc001931c20) (1) Data frame sent I0104 12:57:04.416740 8 log.go:172] (0xc000d626e0) (0xc001931c20) Stream removed, broadcasting: 1 I0104 12:57:04.416883 8 log.go:172] (0xc000d626e0) (0xc001931c20) Stream removed, broadcasting: 1 I0104 12:57:04.416897 8 log.go:172] (0xc000d626e0) (0xc000112820) Stream removed, broadcasting: 3 I0104 12:57:04.416905 8 log.go:172] (0xc000d626e0) (0xc001931d60) Stream removed, broadcasting: 5 I0104 12:57:04.417284 8 log.go:172] (0xc000d626e0) Go away received Jan 4 12:57:04.417: INFO: Waiting for endpoints: map[] Jan 4 12:57:04.430: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9606 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:57:04.430: INFO: >>> kubeConfig: /root/.kube/config I0104 12:57:04.613466 8 log.go:172] (0xc000d14840) (0xc000113360) Create stream I0104 12:57:04.613655 8 log.go:172] (0xc000d14840) (0xc000113360) Stream added, broadcasting: 1 I0104 12:57:04.674815 8 log.go:172] (0xc000d14840) Reply frame received for 1 I0104 12:57:04.674982 8 log.go:172] (0xc000d14840) (0xc0021c8140) Create stream I0104 12:57:04.675003 8 log.go:172] (0xc000d14840) (0xc0021c8140) Stream added, broadcasting: 3 I0104 12:57:04.695097 8 log.go:172] (0xc000d14840) Reply frame received for 3 I0104 12:57:04.695247 8 log.go:172] (0xc000d14840) (0xc000113400) Create stream I0104 12:57:04.695268 8 log.go:172] (0xc000d14840) (0xc000113400) Stream added, broadcasting: 5 I0104 12:57:04.706371 8 log.go:172] (0xc000d14840) Reply frame received for 5 I0104 12:57:04.925840 8 log.go:172] (0xc000d14840) Data frame received for 3 I0104 12:57:04.925895 8 log.go:172] (0xc0021c8140) (3) Data frame handling I0104 
12:57:04.925912 8 log.go:172] (0xc0021c8140) (3) Data frame sent I0104 12:57:05.114304 8 log.go:172] (0xc000d14840) (0xc0021c8140) Stream removed, broadcasting: 3 I0104 12:57:05.114364 8 log.go:172] (0xc000d14840) Data frame received for 1 I0104 12:57:05.114377 8 log.go:172] (0xc000113360) (1) Data frame handling I0104 12:57:05.114386 8 log.go:172] (0xc000d14840) (0xc000113400) Stream removed, broadcasting: 5 I0104 12:57:05.114401 8 log.go:172] (0xc000113360) (1) Data frame sent I0104 12:57:05.114421 8 log.go:172] (0xc000d14840) (0xc000113360) Stream removed, broadcasting: 1 I0104 12:57:05.114450 8 log.go:172] (0xc000d14840) Go away received I0104 12:57:05.114535 8 log.go:172] (0xc000d14840) (0xc000113360) Stream removed, broadcasting: 1 I0104 12:57:05.114579 8 log.go:172] (0xc000d14840) (0xc0021c8140) Stream removed, broadcasting: 3 I0104 12:57:05.114585 8 log.go:172] (0xc000d14840) (0xc000113400) Stream removed, broadcasting: 5 Jan 4 12:57:05.114: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:57:05.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9606" for this suite. 
Jan 4 12:57:29.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:57:29.408: INFO: namespace pod-network-test-9606 deletion completed in 24.27831841s • [SLOW TEST:70.246 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:57:29.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 4 12:57:40.111: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6e2d11bc-767e-4d8b-834f-c523fa0c62d4" Jan 4 12:57:40.111: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6e2d11bc-767e-4d8b-834f-c523fa0c62d4" in namespace "pods-50" to be 
"terminated due to deadline exceeded" Jan 4 12:57:40.123: INFO: Pod "pod-update-activedeadlineseconds-6e2d11bc-767e-4d8b-834f-c523fa0c62d4": Phase="Running", Reason="", readiness=true. Elapsed: 12.122888ms Jan 4 12:57:42.132: INFO: Pod "pod-update-activedeadlineseconds-6e2d11bc-767e-4d8b-834f-c523fa0c62d4": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021319379s Jan 4 12:57:42.133: INFO: Pod "pod-update-activedeadlineseconds-6e2d11bc-767e-4d8b-834f-c523fa0c62d4" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:57:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-50" for this suite. Jan 4 12:57:48.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:57:48.338: INFO: namespace pods-50 deletion completed in 6.197796494s • [SLOW TEST:18.929 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:57:48.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 4 12:57:48.498: INFO: Waiting up to 5m0s for pod "pod-52b2982a-1d53-4129-b877-aab9bef82205" in namespace "emptydir-6796" to be "success or failure" Jan 4 12:57:48.535: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205": Phase="Pending", Reason="", readiness=false. Elapsed: 36.874783ms Jan 4 12:57:50.551: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053373786s Jan 4 12:57:52.562: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064380309s Jan 4 12:57:54.580: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082199588s Jan 4 12:57:56.597: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205": Phase="Running", Reason="", readiness=true. Elapsed: 8.099305714s Jan 4 12:57:58.602: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103649984s STEP: Saw pod success Jan 4 12:57:58.602: INFO: Pod "pod-52b2982a-1d53-4129-b877-aab9bef82205" satisfied condition "success or failure" Jan 4 12:57:58.605: INFO: Trying to get logs from node iruya-node pod pod-52b2982a-1d53-4129-b877-aab9bef82205 container test-container: STEP: delete the pod Jan 4 12:57:58.701: INFO: Waiting for pod pod-52b2982a-1d53-4129-b877-aab9bef82205 to disappear Jan 4 12:57:58.726: INFO: Pod pod-52b2982a-1d53-4129-b877-aab9bef82205 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:57:58.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6796" for this suite. 
Jan 4 12:58:04.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:58:04.977: INFO: namespace emptydir-6796 deletion completed in 6.237396432s • [SLOW TEST:16.639 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:58:04.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jan 4 12:58:05.676: INFO: created pod pod-service-account-defaultsa Jan 4 12:58:05.676: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 4 12:58:05.685: INFO: created pod pod-service-account-mountsa Jan 4 12:58:05.685: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 4 12:58:05.843: INFO: created pod pod-service-account-nomountsa Jan 4 12:58:05.843: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 4 12:58:05.873: INFO: created pod pod-service-account-defaultsa-mountspec Jan 4 12:58:05.873: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 4 12:58:05.913: INFO: created pod pod-service-account-mountsa-mountspec Jan 4 12:58:05.913: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 4 12:58:05.950: INFO: created pod pod-service-account-nomountsa-mountspec Jan 4 12:58:05.950: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 4 12:58:06.013: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 4 12:58:06.013: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 4 12:58:06.051: INFO: created pod pod-service-account-mountsa-nomountspec Jan 4 12:58:06.051: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 4 12:58:06.083: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 4 12:58:06.083: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:58:06.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1956" for this suite. 
Jan 4 12:58:32.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:58:33.003: INFO: namespace svcaccounts-1956 deletion completed in 26.739956596s • [SLOW TEST:28.025 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:58:33.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-0ca15723-21d8-47e0-822a-516f26aab31a STEP: Creating a pod to test consume configMaps Jan 4 12:58:33.135: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8" in namespace "projected-5082" to be "success or failure" Jan 4 12:58:33.142: INFO: Pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.197307ms Jan 4 12:58:35.150: INFO: Pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014736896s Jan 4 12:58:37.161: INFO: Pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025003437s Jan 4 12:58:39.166: INFO: Pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030915399s Jan 4 12:58:41.176: INFO: Pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040678965s STEP: Saw pod success Jan 4 12:58:41.176: INFO: Pod "pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8" satisfied condition "success or failure" Jan 4 12:58:41.180: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8 container projected-configmap-volume-test: STEP: delete the pod Jan 4 12:58:41.251: INFO: Waiting for pod pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8 to disappear Jan 4 12:58:41.258: INFO: Pod pod-projected-configmaps-62436c2e-825b-4c66-8620-23ef42412ce8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:58:41.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5082" for this suite. 
Jan 4 12:58:47.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:58:47.590: INFO: namespace projected-5082 deletion completed in 6.296820641s • [SLOW TEST:14.587 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:58:47.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 12:58:47.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 4 12:58:47.899: INFO: stderr: "" Jan 4 12:58:47.899: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", 
GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:58:47.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6905" for this suite. Jan 4 12:58:55.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:58:55.512: INFO: namespace kubectl-6905 deletion completed in 7.606295212s • [SLOW TEST:7.922 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:58:55.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search 
dns-test-service.dns-7154.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7154.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7154.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7154.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7154.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 135.196.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.196.135_udp@PTR;check="$$(dig +tcp +noall +answer +search 135.196.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.196.135_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7154.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7154.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7154.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7154.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7154.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7154.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 135.196.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.196.135_udp@PTR;check="$$(dig +tcp +noall +answer +search 135.196.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.196.135_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 12:59:10.149: INFO: Unable to read wheezy_udp@dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.158: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.176: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.181: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.186: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.191: INFO: Unable to read wheezy_udp@PodARecord from pod 
dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.197: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.204: INFO: Unable to read 10.105.196.135_udp@PTR from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.211: INFO: Unable to read 10.105.196.135_tcp@PTR from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.218: INFO: Unable to read jessie_udp@dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.227: INFO: Unable to read jessie_tcp@dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.235: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.244: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.249: INFO: Unable to read 
jessie_udp@_http._tcp.test-service-2.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.255: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7154.svc.cluster.local from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.258: INFO: Unable to read jessie_udp@PodARecord from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.262: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.266: INFO: Unable to read 10.105.196.135_udp@PTR from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.272: INFO: Unable to read 10.105.196.135_tcp@PTR from pod dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954: the server could not find the requested resource (get pods dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954) Jan 4 12:59:10.272: INFO: Lookups using dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954 failed for: [wheezy_udp@dns-test-service.dns-7154.svc.cluster.local wheezy_tcp@dns-test-service.dns-7154.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7154.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7154.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.196.135_udp@PTR 
10.105.196.135_tcp@PTR jessie_udp@dns-test-service.dns-7154.svc.cluster.local jessie_tcp@dns-test-service.dns-7154.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7154.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7154.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7154.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.196.135_udp@PTR 10.105.196.135_tcp@PTR] Jan 4 12:59:15.477: INFO: DNS probes using dns-7154/dns-test-1f80c4f6-1829-4577-b406-f97bbffa0954 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:59:15.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7154" for this suite. Jan 4 12:59:21.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:59:22.042: INFO: namespace dns-7154 deletion completed in 6.128260351s • [SLOW TEST:26.530 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:59:22.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc 
STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0104 12:59:32.396770 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 12:59:32.396: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:59:32.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2927" for this suite. 
Jan 4 12:59:38.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:59:38.787: INFO: namespace gc-2927 deletion completed in 6.386441082s • [SLOW TEST:16.744 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:59:38.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 4 12:59:38.983: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 12:59:52.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1586" for 
this suite. Jan 4 12:59:58.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:59:58.753: INFO: namespace init-container-1586 deletion completed in 6.210827603s • [SLOW TEST:19.966 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 12:59:58.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 12:59:58.866: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 29.067551ms)
Jan  4 12:59:58.942: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 75.431809ms)
Jan  4 12:59:58.950: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.879473ms)
Jan  4 12:59:58.954: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.940442ms)
Jan  4 12:59:58.957: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.319276ms)
Jan  4 12:59:58.960: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.997ms)
Jan  4 12:59:58.963: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.826752ms)
Jan  4 12:59:58.966: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.877128ms)
Jan  4 12:59:58.970: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.632575ms)
Jan  4 12:59:58.973: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.237599ms)
Jan  4 12:59:58.976: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.027368ms)
Jan  4 12:59:58.979: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.620602ms)
Jan  4 12:59:58.983: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.038273ms)
Jan  4 12:59:58.986: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.32631ms)
Jan  4 12:59:58.992: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.753029ms)
Jan  4 12:59:58.998: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.988458ms)
Jan  4 12:59:59.004: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.494997ms)
Jan  4 12:59:59.011: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.808043ms)
Jan  4 12:59:59.019: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.766747ms)
Jan  4 12:59:59.027: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.262178ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 12:59:59.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7354" for this suite.
Jan  4 13:00:05.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:00:05.203: INFO: namespace proxy-7354 deletion completed in 6.171836021s

• [SLOW TEST:6.450 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:00:05.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-026142d3-8439-47ef-883d-71b2cc8b3931 in namespace container-probe-962
Jan  4 13:00:15.395: INFO: Started pod test-webserver-026142d3-8439-47ef-883d-71b2cc8b3931 in namespace container-probe-962
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 13:00:15.401: INFO: Initial restart count of pod test-webserver-026142d3-8439-47ef-883d-71b2cc8b3931 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:04:17.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-962" for this suite.
Jan  4 13:04:23.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:04:23.170: INFO: namespace container-probe-962 deletion completed in 6.127183396s

• [SLOW TEST:257.967 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:04:23.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:04:23.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9" in namespace "downward-api-7160" to be "success or failure"
Jan  4 13:04:23.383: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9": Phase="Pending", Reason="", readiness=false. Elapsed: 122.790317ms
Jan  4 13:04:25.388: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128188207s
Jan  4 13:04:27.529: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26862862s
Jan  4 13:04:29.537: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27667866s
Jan  4 13:04:31.546: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28593768s
Jan  4 13:04:33.551: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.291089442s
STEP: Saw pod success
Jan  4 13:04:33.551: INFO: Pod "downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9" satisfied condition "success or failure"
Jan  4 13:04:33.554: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9 container client-container: 
STEP: delete the pod
Jan  4 13:04:33.617: INFO: Waiting for pod downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9 to disappear
Jan  4 13:04:33.640: INFO: Pod downwardapi-volume-711853d1-4b5b-48c0-b5de-9aa5a1aeceb9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:04:33.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7160" for this suite.
Jan  4 13:04:39.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:04:40.049: INFO: namespace downward-api-7160 deletion completed in 6.152122196s

• [SLOW TEST:16.878 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:04:40.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 13:04:40.169: INFO: Waiting up to 5m0s for pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb" in namespace "downward-api-6577" to be "success or failure"
Jan  4 13:04:40.181: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.57607ms
Jan  4 13:04:42.192: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022860547s
Jan  4 13:04:44.212: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043208997s
Jan  4 13:04:46.326: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157184015s
Jan  4 13:04:48.333: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb": Phase="Running", Reason="", readiness=true. Elapsed: 8.164348905s
Jan  4 13:04:50.340: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171151151s
STEP: Saw pod success
Jan  4 13:04:50.340: INFO: Pod "downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb" satisfied condition "success or failure"
Jan  4 13:04:50.345: INFO: Trying to get logs from node iruya-node pod downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb container dapi-container: 
STEP: delete the pod
Jan  4 13:04:50.799: INFO: Waiting for pod downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb to disappear
Jan  4 13:04:50.805: INFO: Pod downward-api-af9df2ed-9121-435a-bb7c-607dab9ec2bb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:04:50.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6577" for this suite.
Jan  4 13:04:56.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:04:57.001: INFO: namespace downward-api-6577 deletion completed in 6.188962038s

• [SLOW TEST:16.951 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:04:57.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 13:04:57.171: INFO: Waiting up to 5m0s for pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779" in namespace "downward-api-6400" to be "success or failure"
Jan  4 13:04:57.187: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779": Phase="Pending", Reason="", readiness=false. Elapsed: 16.186824ms
Jan  4 13:04:59.193: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02272737s
Jan  4 13:05:01.209: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038579149s
Jan  4 13:05:03.222: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050835412s
Jan  4 13:05:05.239: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067814488s
Jan  4 13:05:07.249: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0787007s
STEP: Saw pod success
Jan  4 13:05:07.250: INFO: Pod "downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779" satisfied condition "success or failure"
Jan  4 13:05:07.256: INFO: Trying to get logs from node iruya-node pod downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779 container dapi-container: 
STEP: delete the pod
Jan  4 13:05:07.367: INFO: Waiting for pod downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779 to disappear
Jan  4 13:05:07.445: INFO: Pod downward-api-d1a2bad5-86cd-4d79-a4b6-631407442779 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:05:07.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6400" for this suite.
Jan  4 13:05:13.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:05:13.560: INFO: namespace downward-api-6400 deletion completed in 6.108241911s

• [SLOW TEST:16.559 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:05:13.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan  4 13:05:23.769: INFO: Pod pod-hostip-ab39f513-d292-4f47-a6f3-bfe676cf0b55 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:05:23.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5704" for this suite.
Jan  4 13:06:01.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:06:01.926: INFO: namespace pods-5704 deletion completed in 38.128665965s

• [SLOW TEST:48.365 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:06:01.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:06:02.417: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"611b61ff-9287-457a-9717-8e68ba99597a", Controller:(*bool)(0xc00304cc3a), BlockOwnerDeletion:(*bool)(0xc00304cc3b)}}
Jan  4 13:06:02.443: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"95b4331a-2358-4326-9733-4a8eaed7cf54", Controller:(*bool)(0xc00304cdea), BlockOwnerDeletion:(*bool)(0xc00304cdeb)}}
Jan  4 13:06:02.475: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"73de9a5a-ff45-4afd-833d-192ddbde8360", Controller:(*bool)(0xc002781b22), BlockOwnerDeletion:(*bool)(0xc002781b23)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:06:07.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4589" for this suite.
Jan  4 13:06:13.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:06:13.781: INFO: namespace gc-4589 deletion completed in 6.199161688s

• [SLOW TEST:11.854 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:06:13.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:06:13.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad" in namespace "downward-api-7152" to be "success or failure"
Jan  4 13:06:13.939: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Pending", Reason="", readiness=false. Elapsed: 5.564129ms
Jan  4 13:06:15.945: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011488649s
Jan  4 13:06:17.953: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020006346s
Jan  4 13:06:19.960: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026777367s
Jan  4 13:06:21.970: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03669914s
Jan  4 13:06:23.976: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042614357s
Jan  4 13:06:25.982: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.049168223s
STEP: Saw pod success
Jan  4 13:06:25.982: INFO: Pod "downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad" satisfied condition "success or failure"
Jan  4 13:06:25.984: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad container client-container: 
STEP: delete the pod
Jan  4 13:06:26.075: INFO: Waiting for pod downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad to disappear
Jan  4 13:06:26.159: INFO: Pod downwardapi-volume-d5597ca9-209d-4ef0-86b5-dd2105bdfdad no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:06:26.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7152" for this suite.
Jan  4 13:06:32.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:06:32.361: INFO: namespace downward-api-7152 deletion completed in 6.19277164s

• [SLOW TEST:18.580 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:06:32.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-5260b236-4ec0-4299-af9f-aa528e3fab4d
STEP: Creating secret with name secret-projected-all-test-volume-216f7eee-5fd3-4a7f-a058-3332429fa775
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  4 13:06:32.625: INFO: Waiting up to 5m0s for pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22" in namespace "projected-1191" to be "success or failure"
Jan  4 13:06:32.631: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Pending", Reason="", readiness=false. Elapsed: 5.200858ms
Jan  4 13:06:34.644: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018467962s
Jan  4 13:06:36.657: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032060693s
Jan  4 13:06:38.667: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041420181s
Jan  4 13:06:40.677: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051820644s
Jan  4 13:06:42.689: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063705742s
Jan  4 13:06:45.131: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.506072181s
STEP: Saw pod success
Jan  4 13:06:45.132: INFO: Pod "projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22" satisfied condition "success or failure"
Jan  4 13:06:45.144: INFO: Trying to get logs from node iruya-node pod projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22 container projected-all-volume-test: 
STEP: delete the pod
Jan  4 13:06:45.310: INFO: Waiting for pod projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22 to disappear
Jan  4 13:06:45.321: INFO: Pod projected-volume-1b3dd11a-49da-417b-88e7-d55fefbd8f22 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:06:45.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1191" for this suite.
Jan  4 13:06:51.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:06:51.477: INFO: namespace projected-1191 deletion completed in 6.152425022s

• [SLOW TEST:19.116 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:06:51.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  4 13:06:51.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1054'
Jan  4 13:06:54.339: INFO: stderr: ""
Jan  4 13:06:54.339: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:06:54.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1054'
Jan  4 13:06:54.513: INFO: stderr: ""
Jan  4 13:06:54.513: INFO: stdout: "update-demo-nautilus-hvkwq update-demo-nautilus-xfml4 "
Jan  4 13:06:54.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:06:54.662: INFO: stderr: ""
Jan  4 13:06:54.662: INFO: stdout: ""
Jan  4 13:06:54.662: INFO: update-demo-nautilus-hvkwq is created but not running
Jan  4 13:06:59.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1054'
Jan  4 13:06:59.778: INFO: stderr: ""
Jan  4 13:06:59.778: INFO: stdout: "update-demo-nautilus-hvkwq update-demo-nautilus-xfml4 "
Jan  4 13:06:59.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:06:59.972: INFO: stderr: ""
Jan  4 13:06:59.972: INFO: stdout: ""
Jan  4 13:06:59.972: INFO: update-demo-nautilus-hvkwq is created but not running
Jan  4 13:07:04.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1054'
Jan  4 13:07:06.076: INFO: stderr: ""
Jan  4 13:07:06.077: INFO: stdout: "update-demo-nautilus-hvkwq update-demo-nautilus-xfml4 "
Jan  4 13:07:06.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:06.311: INFO: stderr: ""
Jan  4 13:07:06.311: INFO: stdout: ""
Jan  4 13:07:06.311: INFO: update-demo-nautilus-hvkwq is created but not running
Jan  4 13:07:11.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1054'
Jan  4 13:07:11.462: INFO: stderr: ""
Jan  4 13:07:11.462: INFO: stdout: "update-demo-nautilus-hvkwq update-demo-nautilus-xfml4 "
Jan  4 13:07:11.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:11.551: INFO: stderr: ""
Jan  4 13:07:11.551: INFO: stdout: "true"
Jan  4 13:07:11.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvkwq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:11.627: INFO: stderr: ""
Jan  4 13:07:11.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:07:11.627: INFO: validating pod update-demo-nautilus-hvkwq
Jan  4 13:07:11.633: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:07:11.633: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  4 13:07:11.633: INFO: update-demo-nautilus-hvkwq is verified up and running
Jan  4 13:07:11.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xfml4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:11.717: INFO: stderr: ""
Jan  4 13:07:11.717: INFO: stdout: "true"
Jan  4 13:07:11.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xfml4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:11.816: INFO: stderr: ""
Jan  4 13:07:11.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:07:11.816: INFO: validating pod update-demo-nautilus-xfml4
Jan  4 13:07:11.835: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:07:11.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  4 13:07:11.836: INFO: update-demo-nautilus-xfml4 is verified up and running
STEP: rolling-update to new replication controller
Jan  4 13:07:11.837: INFO: scanned /root for discovery docs: 
Jan  4 13:07:11.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1054'
Jan  4 13:07:51.378: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  4 13:07:51.378: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:07:51.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1054'
Jan  4 13:07:51.528: INFO: stderr: ""
Jan  4 13:07:51.529: INFO: stdout: "update-demo-kitten-7v9f5 update-demo-kitten-dvd6n update-demo-nautilus-xfml4 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  4 13:07:56.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1054'
Jan  4 13:07:56.683: INFO: stderr: ""
Jan  4 13:07:56.683: INFO: stdout: "update-demo-kitten-7v9f5 update-demo-kitten-dvd6n "
Jan  4 13:07:56.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7v9f5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:56.774: INFO: stderr: ""
Jan  4 13:07:56.774: INFO: stdout: "true"
Jan  4 13:07:56.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7v9f5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:56.897: INFO: stderr: ""
Jan  4 13:07:56.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  4 13:07:56.897: INFO: validating pod update-demo-kitten-7v9f5
Jan  4 13:07:56.935: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  4 13:07:56.935: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  4 13:07:56.935: INFO: update-demo-kitten-7v9f5 is verified up and running
Jan  4 13:07:56.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dvd6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:57.052: INFO: stderr: ""
Jan  4 13:07:57.052: INFO: stdout: "true"
Jan  4 13:07:57.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dvd6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1054'
Jan  4 13:07:57.155: INFO: stderr: ""
Jan  4 13:07:57.156: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  4 13:07:57.156: INFO: validating pod update-demo-kitten-dvd6n
Jan  4 13:07:57.179: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  4 13:07:57.179: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  4 13:07:57.179: INFO: update-demo-kitten-dvd6n is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:07:57.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1054" for this suite.
Jan  4 13:08:21.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:08:21.277: INFO: namespace kubectl-1054 deletion completed in 24.095342915s

• [SLOW TEST:89.800 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
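Editor's note: the go-template kubectl runs repeatedly in the test above walks `.status.containerStatuses` and prints "true" once the named container reports a running state. The sketch below reproduces that check in plain Go `text/template` against an in-memory pod map; the `exists` helper here is an assumption standing in for the function kubectl registers, and `samplePod` is illustrative data, not real API output.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// exists reports whether the chain of keys is present in nested maps,
// approximating the "exists" helper kubectl registers for its templates.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

// render executes the same template the log shows kubectl running:
// it emits "true" when the update-demo container is in the running state.
func render(pod map[string]interface{}) string {
	const check = `{{if (exists . "status" "containerStatuses")}}` +
		`{{range .status.containerStatuses}}` +
		`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` +
		`{{end}}{{end}}`
	t := template.Must(template.New("check").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(check))
	var b strings.Builder
	if err := t.Execute(&b, pod); err != nil {
		return "error: " + err.Error()
	}
	return b.String()
}

// samplePod mimics the relevant slice of a pod's status (illustrative).
func samplePod() map[string]interface{} {
	return map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(render(samplePod()))
}
```

Until the container is running, the template renders an empty string — which is why the log alternates `stdout: ""` with "is created but not running" before finally printing `stdout: "true"`.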
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:08:21.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-43f99905-8319-4bb4-b25c-5a180185e360
STEP: Creating a pod to test consume secrets
Jan  4 13:08:21.484: INFO: Waiting up to 5m0s for pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111" in namespace "secrets-7266" to be "success or failure"
Jan  4 13:08:21.532: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 47.667107ms
Jan  4 13:08:23.545: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061213646s
Jan  4 13:08:25.556: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071922969s
Jan  4 13:08:27.568: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084235896s
Jan  4 13:08:29.622: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137596244s
Jan  4 13:08:31.629: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 10.144624281s
Jan  4 13:08:33.648: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Pending", Reason="", readiness=false. Elapsed: 12.163594281s
Jan  4 13:08:35.655: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.170594864s
STEP: Saw pod success
Jan  4 13:08:35.655: INFO: Pod "pod-secrets-b39918b8-fa77-4656-af51-caa85932c111" satisfied condition "success or failure"
Jan  4 13:08:35.660: INFO: Trying to get logs from node iruya-node pod pod-secrets-b39918b8-fa77-4656-af51-caa85932c111 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:08:36.025: INFO: Waiting for pod pod-secrets-b39918b8-fa77-4656-af51-caa85932c111 to disappear
Jan  4 13:08:36.036: INFO: Pod pod-secrets-b39918b8-fa77-4656-af51-caa85932c111 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:08:36.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7266" for this suite.
Jan  4 13:08:42.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:08:42.175: INFO: namespace secrets-7266 deletion completed in 6.133217186s

• [SLOW TEST:20.898 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:08:42.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:08:42.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4" in namespace "projected-5394" to be "success or failure"
Jan  4 13:08:42.439: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.640514ms
Jan  4 13:08:44.493: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061782408s
Jan  4 13:08:46.547: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115866514s
Jan  4 13:08:48.559: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127249133s
Jan  4 13:08:50.571: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139854739s
Jan  4 13:08:52.585: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153114633s
STEP: Saw pod success
Jan  4 13:08:52.585: INFO: Pod "downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4" satisfied condition "success or failure"
Jan  4 13:08:52.592: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4 container client-container: 
STEP: delete the pod
Jan  4 13:08:52.650: INFO: Waiting for pod downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4 to disappear
Jan  4 13:08:52.653: INFO: Pod downwardapi-volume-6a57ede5-45f0-40de-8362-d4d67af1d1a4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:08:52.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5394" for this suite.
Jan  4 13:08:58.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:08:58.750: INFO: namespace projected-5394 deletion completed in 6.092244959s

• [SLOW TEST:16.574 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
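Editor's note: the behavior this test verifies — when a container sets no cpu limit, the downward API reports the node's allocatable cpu instead — reduces to a simple fallback. A sketch, with values in millicores; the function name and numbers are illustrative, not the framework's code.

```go
package main

import "fmt"

// defaultedCPULimit returns the value the downward API exposes as the
// container's cpu limit: the container's own limit when set, otherwise
// the node's allocatable cpu.
func defaultedCPULimit(containerLimitMilli, nodeAllocatableMilli int64) int64 {
	if containerLimitMilli == 0 {
		return nodeAllocatableMilli
	}
	return containerLimitMilli
}

func main() {
	fmt.Println(defaultedCPULimit(0, 4000))   // no limit set: node allocatable wins
	fmt.Println(defaultedCPULimit(500, 4000)) // explicit limit wins
}
```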
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:08:58.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  4 13:08:58.922: INFO: Waiting up to 5m0s for pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d" in namespace "containers-8765" to be "success or failure"
Jan  4 13:08:58.930: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087099ms
Jan  4 13:09:00.935: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012927003s
Jan  4 13:09:02.945: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022890814s
Jan  4 13:09:04.956: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03397958s
Jan  4 13:09:06.964: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041568613s
Jan  4 13:09:08.974: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.051668622s
Jan  4 13:09:12.822: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.900149029s
STEP: Saw pod success
Jan  4 13:09:12.822: INFO: Pod "client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d" satisfied condition "success or failure"
Jan  4 13:09:12.857: INFO: Trying to get logs from node iruya-node pod client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d container test-container: 
STEP: delete the pod
Jan  4 13:09:13.325: INFO: Waiting for pod client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d to disappear
Jan  4 13:09:13.344: INFO: Pod client-containers-d959c7e6-e9f0-4713-a9cf-c878a970a03d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:09:13.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8765" for this suite.
Jan  4 13:09:19.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:09:19.593: INFO: namespace containers-8765 deletion completed in 6.244236782s

• [SLOW TEST:20.843 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
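Editor's note: "use the image defaults if command and args are blank" refers to the documented Kubernetes merge rules between a container's `command`/`args` and the image's ENTRYPOINT/CMD. A sketch of those four cases; the sample entrypoint strings are hypothetical.

```go
package main

import "fmt"

// effectiveCommand applies the documented Kubernetes rules:
//   - command and args both empty: image ENTRYPOINT + image CMD
//   - only args set:               image ENTRYPOINT + args
//   - only command set:            command alone (image CMD ignored)
//   - both set:                    command + args
func effectiveCommand(command, args, entrypoint, imageCmd []string) []string {
	switch {
	case len(command) == 0 && len(args) == 0:
		return append(append([]string{}, entrypoint...), imageCmd...)
	case len(command) == 0:
		return append(append([]string{}, entrypoint...), args...)
	case len(args) == 0:
		return append([]string{}, command...)
	default:
		return append(append([]string{}, command...), args...)
	}
}

func main() {
	// Blank command and args, as in the test above: the image defaults win.
	fmt.Println(effectiveCommand(nil, nil, []string{"/entrypoint"}, []string{"serve"}))
}
```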
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:09:19.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  4 13:09:19.768: INFO: Waiting up to 5m0s for pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7" in namespace "emptydir-7429" to be "success or failure"
Jan  4 13:09:19.795: INFO: Pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.815967ms
Jan  4 13:09:21.806: INFO: Pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038558171s
Jan  4 13:09:23.889: INFO: Pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120868456s
Jan  4 13:09:25.899: INFO: Pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131205629s
Jan  4 13:09:28.026: INFO: Pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.258387854s
STEP: Saw pod success
Jan  4 13:09:28.026: INFO: Pod "pod-c96da40f-7921-40ad-a48c-dc8de707e0e7" satisfied condition "success or failure"
Jan  4 13:09:28.034: INFO: Trying to get logs from node iruya-node pod pod-c96da40f-7921-40ad-a48c-dc8de707e0e7 container test-container: 
STEP: delete the pod
Jan  4 13:09:28.290: INFO: Waiting for pod pod-c96da40f-7921-40ad-a48c-dc8de707e0e7 to disappear
Jan  4 13:09:28.297: INFO: Pod pod-c96da40f-7921-40ad-a48c-dc8de707e0e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:09:28.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7429" for this suite.
Jan  4 13:09:34.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:09:34.448: INFO: namespace emptydir-7429 deletion completed in 6.143656799s

• [SLOW TEST:14.855 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:09:34.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:09:34.526: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc" in namespace "projected-772" to be "success or failure"
Jan  4 13:09:34.534: INFO: Pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24342ms
Jan  4 13:09:36.552: INFO: Pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025527874s
Jan  4 13:09:38.561: INFO: Pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034856407s
Jan  4 13:09:40.567: INFO: Pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040627175s
Jan  4 13:09:42.574: INFO: Pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047831477s
STEP: Saw pod success
Jan  4 13:09:42.574: INFO: Pod "downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc" satisfied condition "success or failure"
Jan  4 13:09:42.577: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc container client-container: 
STEP: delete the pod
Jan  4 13:09:42.639: INFO: Waiting for pod downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc to disappear
Jan  4 13:09:42.649: INFO: Pod downwardapi-volume-22c933ac-9aea-45bd-a3ae-ded71d859bcc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:09:42.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-772" for this suite.
Jan  4 13:09:48.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:09:48.843: INFO: namespace projected-772 deletion completed in 6.188254792s

• [SLOW TEST:14.394 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:09:48.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-c7dc54ae-8ff0-4487-b151-c7427932f711
STEP: Creating configMap with name cm-test-opt-upd-5ae88220-7bd3-4f84-bf80-3064c05bd466
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c7dc54ae-8ff0-4487-b151-c7427932f711
STEP: Updating configmap cm-test-opt-upd-5ae88220-7bd3-4f84-bf80-3064c05bd466
STEP: Creating configMap with name cm-test-opt-create-0d67bb4b-9186-4ee2-a357-ef22ea5ea561
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:10:05.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2410" for this suite.
Jan  4 13:10:27.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:10:27.477: INFO: namespace projected-2410 deletion completed in 22.189339722s

• [SLOW TEST:38.634 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:10:27.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  4 13:10:27.646: INFO: Number of nodes with available pods: 0
Jan  4 13:10:27.646: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:29.241: INFO: Number of nodes with available pods: 0
Jan  4 13:10:29.241: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:29.659: INFO: Number of nodes with available pods: 0
Jan  4 13:10:29.659: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:30.948: INFO: Number of nodes with available pods: 0
Jan  4 13:10:30.948: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:31.663: INFO: Number of nodes with available pods: 0
Jan  4 13:10:31.663: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:32.679: INFO: Number of nodes with available pods: 0
Jan  4 13:10:32.679: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:35.250: INFO: Number of nodes with available pods: 0
Jan  4 13:10:35.250: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:36.480: INFO: Number of nodes with available pods: 0
Jan  4 13:10:36.480: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:36.657: INFO: Number of nodes with available pods: 0
Jan  4 13:10:36.657: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:37.860: INFO: Number of nodes with available pods: 0
Jan  4 13:10:37.860: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:38.703: INFO: Number of nodes with available pods: 0
Jan  4 13:10:38.703: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:39.657: INFO: Number of nodes with available pods: 1
Jan  4 13:10:39.657: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:10:40.660: INFO: Number of nodes with available pods: 2
Jan  4 13:10:40.660: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  4 13:10:40.701: INFO: Number of nodes with available pods: 1
Jan  4 13:10:40.701: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:41.721: INFO: Number of nodes with available pods: 1
Jan  4 13:10:41.721: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:42.721: INFO: Number of nodes with available pods: 1
Jan  4 13:10:42.721: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:43.718: INFO: Number of nodes with available pods: 1
Jan  4 13:10:43.718: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:44.709: INFO: Number of nodes with available pods: 1
Jan  4 13:10:44.709: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:45.709: INFO: Number of nodes with available pods: 1
Jan  4 13:10:45.709: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:46.737: INFO: Number of nodes with available pods: 1
Jan  4 13:10:46.737: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:47.722: INFO: Number of nodes with available pods: 1
Jan  4 13:10:47.722: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:49.020: INFO: Number of nodes with available pods: 1
Jan  4 13:10:49.020: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:49.714: INFO: Number of nodes with available pods: 1
Jan  4 13:10:49.714: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:50.744: INFO: Number of nodes with available pods: 1
Jan  4 13:10:50.745: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:52.219: INFO: Number of nodes with available pods: 1
Jan  4 13:10:52.219: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:52.718: INFO: Number of nodes with available pods: 1
Jan  4 13:10:52.718: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:53.984: INFO: Number of nodes with available pods: 1
Jan  4 13:10:53.984: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:54.724: INFO: Number of nodes with available pods: 1
Jan  4 13:10:54.724: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:55.717: INFO: Number of nodes with available pods: 1
Jan  4 13:10:55.717: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:56.723: INFO: Number of nodes with available pods: 1
Jan  4 13:10:56.723: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 13:10:57.714: INFO: Number of nodes with available pods: 2
Jan  4 13:10:57.714: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4712, will wait for the garbage collector to delete the pods
Jan  4 13:10:58.265: INFO: Deleting DaemonSet.extensions daemon-set took: 494.052863ms
Jan  4 13:10:58.966: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.341201ms
Jan  4 13:11:16.679: INFO: Number of nodes with available pods: 0
Jan  4 13:11:16.679: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 13:11:16.686: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4712/daemonsets","resourceVersion":"19266970"},"items":null}

Jan  4 13:11:16.690: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4712/pods","resourceVersion":"19266970"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:11:16.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4712" for this suite.
Jan  4 13:11:22.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:11:22.935: INFO: namespace daemonsets-4712 deletion completed in 6.228367103s

• [SLOW TEST:55.458 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:11:22.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan  4 13:11:22.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4963'
Jan  4 13:11:23.338: INFO: stderr: ""
Jan  4 13:11:23.338: INFO: stdout: "pod/pause created\n"
Jan  4 13:11:23.338: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  4 13:11:23.338: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4963" to be "running and ready"
Jan  4 13:11:23.445: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 107.251675ms
Jan  4 13:11:25.453: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115777676s
Jan  4 13:11:27.476: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137893213s
Jan  4 13:11:29.497: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159529048s
Jan  4 13:11:31.504: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166295324s
Jan  4 13:11:33.575: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236983945s
Jan  4 13:11:35.594: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.256166686s
Jan  4 13:11:35.594: INFO: Pod "pause" satisfied condition "running and ready"
Jan  4 13:11:35.594: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  4 13:11:35.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4963'
Jan  4 13:11:35.766: INFO: stderr: ""
Jan  4 13:11:35.766: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  4 13:11:35.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4963'
Jan  4 13:11:35.971: INFO: stderr: ""
Jan  4 13:11:35.971: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  4 13:11:35.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4963'
Jan  4 13:11:36.135: INFO: stderr: ""
Jan  4 13:11:36.135: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  4 13:11:36.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4963'
Jan  4 13:11:36.234: INFO: stderr: ""
Jan  4 13:11:36.234: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan  4 13:11:36.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4963'
Jan  4 13:11:36.382: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 13:11:36.382: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  4 13:11:36.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4963'
Jan  4 13:11:36.518: INFO: stderr: "No resources found.\n"
Jan  4 13:11:36.518: INFO: stdout: ""
Jan  4 13:11:36.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4963 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 13:11:36.595: INFO: stderr: ""
Jan  4 13:11:36.595: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:11:36.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4963" for this suite.
Jan  4 13:11:42.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:11:42.839: INFO: namespace kubectl-4963 deletion completed in 6.239019719s

• [SLOW TEST:19.904 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:11:42.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan  4 13:11:42.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  4 13:11:43.082: INFO: stderr: ""
Jan  4 13:11:43.082: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:11:43.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3649" for this suite.
Jan  4 13:11:49.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:11:49.174: INFO: namespace kubectl-3649 deletion completed in 6.086810828s

• [SLOW TEST:6.335 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:11:49.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:11:49.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732" in namespace "downward-api-6138" to be "success or failure"
Jan  4 13:11:49.367: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732": Phase="Pending", Reason="", readiness=false. Elapsed: 120.49503ms
Jan  4 13:11:51.377: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130516346s
Jan  4 13:11:53.382: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135753157s
Jan  4 13:11:55.389: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142775927s
Jan  4 13:11:57.395: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149294526s
Jan  4 13:11:59.402: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156267853s
STEP: Saw pod success
Jan  4 13:11:59.402: INFO: Pod "downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732" satisfied condition "success or failure"
Jan  4 13:11:59.407: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732 container client-container: 
STEP: delete the pod
Jan  4 13:11:59.640: INFO: Waiting for pod downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732 to disappear
Jan  4 13:11:59.648: INFO: Pod downwardapi-volume-7bf226d0-c7bd-4ffe-a7d0-f2a0f0ab4732 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:11:59.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6138" for this suite.
Jan  4 13:12:07.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:12:07.758: INFO: namespace downward-api-6138 deletion completed in 8.107087945s

• [SLOW TEST:18.584 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:12:07.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-c538495d-d02d-4d0a-ad23-bf5475352a01
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c538495d-d02d-4d0a-ad23-bf5475352a01
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:13:42.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8178" for this suite.
Jan  4 13:14:04.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:14:04.432: INFO: namespace configmap-8178 deletion completed in 22.323970317s

• [SLOW TEST:116.674 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:14:04.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:14:04.575: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.352551ms)
Jan  4 13:14:04.584: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.492362ms)
Jan  4 13:14:04.590: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.28182ms)
Jan  4 13:14:04.665: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 75.591853ms)
Jan  4 13:14:04.672: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.884996ms)
Jan  4 13:14:04.677: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.878386ms)
Jan  4 13:14:04.683: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.250805ms)
Jan  4 13:14:04.689: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.841994ms)
Jan  4 13:14:04.696: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.137278ms)
Jan  4 13:14:04.704: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.764716ms)
Jan  4 13:14:04.712: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.974361ms)
Jan  4 13:14:04.718: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.924223ms)
Jan  4 13:14:04.724: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.371842ms)
Jan  4 13:14:04.729: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.610606ms)
Jan  4 13:14:04.734: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.295818ms)
Jan  4 13:14:04.738: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.536889ms)
Jan  4 13:14:04.745: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.377461ms)
Jan  4 13:14:04.757: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.491149ms)
Jan  4 13:14:04.761: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.3573ms)
Jan  4 13:14:04.836: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 75.157788ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:14:04.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7856" for this suite.
Jan  4 13:14:10.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:14:11.014: INFO: namespace proxy-7856 deletion completed in 6.173542939s

• [SLOW TEST:6.581 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:14:11.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ad1d5cf4-857b-41e9-91c4-68877b627e9c
STEP: Creating a pod to test consume secrets
Jan  4 13:14:11.154: INFO: Waiting up to 5m0s for pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff" in namespace "secrets-1701" to be "success or failure"
Jan  4 13:14:11.169: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff": Phase="Pending", Reason="", readiness=false. Elapsed: 14.765602ms
Jan  4 13:14:13.184: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030634717s
Jan  4 13:14:15.239: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085234654s
Jan  4 13:14:17.244: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090428041s
Jan  4 13:14:19.256: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101730047s
Jan  4 13:14:21.262: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108382009s
STEP: Saw pod success
Jan  4 13:14:21.262: INFO: Pod "pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff" satisfied condition "success or failure"
Jan  4 13:14:21.267: INFO: Trying to get logs from node iruya-node pod pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff container secret-volume-test: 
STEP: delete the pod
Jan  4 13:14:21.360: INFO: Waiting for pod pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff to disappear
Jan  4 13:14:21.367: INFO: Pod pod-secrets-e5c390c5-086c-48d3-b00d-f3a1017868ff no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:14:21.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1701" for this suite.
Jan  4 13:14:27.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:14:27.556: INFO: namespace secrets-1701 deletion completed in 6.176051186s

• [SLOW TEST:16.542 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:14:27.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  4 13:14:27.687: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  4 13:14:33.097: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:14:33.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4602" for this suite.
Jan  4 13:14:39.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:14:39.509: INFO: namespace replication-controller-4602 deletion completed in 6.206766832s

• [SLOW TEST:11.952 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:14:39.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 13:14:39.805: INFO: PodSpec: initContainers in spec.initContainers
Jan  4 13:15:49.069: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5029ea5d-eb8c-41ad-8a88-0db52527ffda", GenerateName:"", Namespace:"init-container-3348", SelfLink:"/api/v1/namespaces/init-container-3348/pods/pod-init-5029ea5d-eb8c-41ad-8a88-0db52527ffda", UID:"32170a6c-b183-4c43-91ab-417014aeec4b", ResourceVersion:"19267539", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713740479, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"805836981"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pbbvz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002dd69c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pbbvz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pbbvz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pbbvz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026db078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0033c7ec0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026db100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026db120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026db128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026db12c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740481, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740481, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740481, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740479, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001aab9e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa6850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa68c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://8a36c603c694e630787c680ae236c6cf6d7b6c92af0d762149c9759f2ca40f57"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001aaba20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001aaba00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:15:49.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3348" for this suite.
Jan  4 13:16:11.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:16:11.374: INFO: namespace init-container-3348 deletion completed in 22.199977617s

• [SLOW TEST:91.865 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
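Editor's note: the pod dumped above (init1/init2/run1) is the subject of this InitContainer test. A minimal sketch of an equivalent manifest, with a hypothetical pod name, reconstructed from the fields visible in the dump: init1 always fails, so with restartPolicy Always the kubelet keeps restarting init1 (RestartCount:3 in the dump) and never starts init2 or the app container run1.

```yaml
# Hypothetical sketch; pod name is illustrative, images and commands
# are taken from the struct dump above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails, blocking the containers below
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
```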
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:16:11.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  4 13:16:22.167: INFO: Successfully updated pod "annotationupdatee72c2545-78ca-4c53-bc7b-1357bd77fade"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:16:24.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6384" for this suite.
Jan  4 13:17:02.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:17:02.440: INFO: namespace downward-api-6384 deletion completed in 38.155109813s

• [SLOW TEST:51.065 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
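Editor's note: the Downward API test above creates a pod that projects its own annotations into a file, then updates the annotations and waits for the kubelet to refresh the file. A minimal sketch of such a pod, with hypothetical names and annotation values:

```yaml
# Hypothetical sketch of the kind of pod this test exercises: a
# downwardAPI volume exposes metadata.annotations as a file, which the
# kubelet rewrites when the pod's annotations are modified.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # hypothetical name
  annotations:
    build: "one"                   # hypothetical annotation
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```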
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:17:02.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  4 13:17:02.503: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 13:17:02.519: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 13:17:02.521: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  4 13:17:02.553: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.553: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 13:17:02.553: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  4 13:17:02.553: INFO: 	Container weave ready: true, restart count 0
Jan  4 13:17:02.553: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 13:17:02.553: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  4 13:17:02.604: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container kube-controller-manager ready: true, restart count 17
Jan  4 13:17:02.604: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 13:17:02.604: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  4 13:17:02.604: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  4 13:17:02.604: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container coredns ready: true, restart count 0
Jan  4 13:17:02.604: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container coredns ready: true, restart count 0
Jan  4 13:17:02.604: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container etcd ready: true, restart count 0
Jan  4 13:17:02.604: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  4 13:17:02.604: INFO: 	Container weave ready: true, restart count 0
Jan  4 13:17:02.604: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5730d8a1-9756-470f-ac86-5cc5e64ec2d0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5730d8a1-9756-470f-ac86-5cc5e64ec2d0 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5730d8a1-9756-470f-ac86-5cc5e64ec2d0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:17:24.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-25" for this suite.
Jan  4 13:17:38.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:17:39.088: INFO: namespace sched-pred-25 deletion completed in 14.187227253s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:36.648 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
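Editor's note: the SchedulerPredicates test above applies a random label to a node and relaunches a pod whose nodeSelector must match it. A hypothetical manifest equivalent of what the test does through the API (label key/value and pod name are illustrative):

```yaml
# Hypothetical sketch: the pod can only schedule onto a node carrying
# the matching label, which the test first applies to iruya-node.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels   # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"   # must match the label put on the node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```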
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:17:39.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan  4 13:17:50.247: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7724 pod-service-account-ea42fb1c-6ddf-4a8d-9e89-a5f891597994 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan  4 13:17:52.777: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7724 pod-service-account-ea42fb1c-6ddf-4a8d-9e89-a5f891597994 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan  4 13:17:53.201: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7724 pod-service-account-ea42fb1c-6ddf-4a8d-9e89-a5f891597994 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:17:53.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7724" for this suite.
Jan  4 13:17:59.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:17:59.799: INFO: namespace svcaccounts-7724 deletion completed in 6.208014893s

• [SLOW TEST:20.711 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
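Editor's note: the ServiceAccounts test above reads three files from the auto-mounted token volume. A minimal sketch of the pod side, with a hypothetical name; the mount itself is injected by the service account admission plugin rather than declared by the user:

```yaml
# Hypothetical sketch: with automount enabled (the default), the
# kubelet mounts the service account's token secret at a fixed path,
# where the test then reads token, ca.crt, and namespace.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example   # hypothetical name
spec:
  serviceAccountName: default
  automountServiceAccountToken: true
  containers:
  - name: test
    image: busybox:1.29
    command: ["sleep", "3600"]
    # Admission injects a volumeMount providing:
    #   /var/run/secrets/kubernetes.io/serviceaccount/token
    #   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    #   /var/run/secrets/kubernetes.io/serviceaccount/namespace
```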
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:17:59.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-d15960e0-14b4-41d4-930d-55b02319b02e
STEP: Creating a pod to test consume secrets
Jan  4 13:17:59.978: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158" in namespace "projected-8261" to be "success or failure"
Jan  4 13:17:59.987: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Pending", Reason="", readiness=false. Elapsed: 9.302855ms
Jan  4 13:18:02.004: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026627012s
Jan  4 13:18:04.014: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036041118s
Jan  4 13:18:06.028: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049990927s
Jan  4 13:18:08.043: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065178024s
Jan  4 13:18:10.057: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07926024s
Jan  4 13:18:12.066: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.088794751s
STEP: Saw pod success
Jan  4 13:18:12.067: INFO: Pod "pod-projected-secrets-59606c04-4f10-40de-8399-650028757158" satisfied condition "success or failure"
Jan  4 13:18:12.071: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-59606c04-4f10-40de-8399-650028757158 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:18:12.242: INFO: Waiting for pod pod-projected-secrets-59606c04-4f10-40de-8399-650028757158 to disappear
Jan  4 13:18:12.247: INFO: Pod pod-projected-secrets-59606c04-4f10-40de-8399-650028757158 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:18:12.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8261" for this suite.
Jan  4 13:18:18.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:18:18.410: INFO: namespace projected-8261 deletion completed in 6.155420835s

• [SLOW TEST:18.610 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
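Editor's note: the Projected secret test above consumes one secret through more than one volume in the same pod. A hypothetical sketch of that shape (names and paths are illustrative):

```yaml
# Hypothetical sketch: the same secret is projected into two separate
# volumes, which is what "consumable in multiple volumes" checks.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-one/data /etc/projected-two/data"]
    volumeMounts:
    - name: projected-one
      mountPath: /etc/projected-one
    - name: projected-two
      mountPath: /etc/projected-two
  volumes:
  - name: projected-one
    projected:
      sources:
      - secret:
          name: projected-secret-example   # hypothetical secret name
  - name: projected-two
    projected:
      sources:
      - secret:
          name: projected-secret-example
```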
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:18:18.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:18:18.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:18:29.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3475" for this suite.
Jan  4 13:19:21.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:19:21.186: INFO: namespace pods-3475 deletion completed in 52.150368598s

• [SLOW TEST:62.775 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:19:21.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:19:31.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2189" for this suite.
Jan  4 13:20:23.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:20:23.558: INFO: namespace kubelet-test-2189 deletion completed in 52.180365091s

• [SLOW TEST:62.372 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
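Editor's note: the Kubelet hostAliases test above verifies that user-supplied aliases appear in the container's /etc/hosts. A hypothetical sketch of such a pod (name, IP, and hostnames are illustrative):

```yaml
# Hypothetical sketch: the kubelet appends each hostAliases entry to
# the container's /etc/hosts, which the test then reads back.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases   # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"          # hypothetical values
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox:1.29
    command: ["cat", "/etc/hosts"]
```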
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:20:23.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5824, will wait for the garbage collector to delete the pods
Jan  4 13:20:35.720: INFO: Deleting Job.batch foo took: 7.913102ms
Jan  4 13:20:36.020: INFO: Terminating Job.batch foo pods took: 300.422208ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:21:16.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5824" for this suite.
Jan  4 13:21:22.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:21:22.799: INFO: namespace job-5824 deletion completed in 6.163810182s

• [SLOW TEST:59.240 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
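Editor's note: the Job test above creates a job named foo, waits until active pods equal parallelism, then deletes the Job and lets the garbage collector remove the pods. A hypothetical sketch of such a Job (spec values are illustrative; only the name "foo" appears in the log):

```yaml
# Hypothetical sketch: "Ensuring active pods == parallelism" holds
# while the pods sleep; deleting the Job cascades to the pods via the
# garbage collector, as the log's "Terminating Job.batch foo pods" shows.
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2     # hypothetical value
  completions: 4     # hypothetical value
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.29
        command: ["sleep", "300"]
```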
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:21:22.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:21:22.975: INFO: Create a RollingUpdate DaemonSet
Jan  4 13:21:22.982: INFO: Check that daemon pods launch on every node of the cluster
Jan  4 13:21:23.064: INFO: Number of nodes with available pods: 0
Jan  4 13:21:23.064: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:25.322: INFO: Number of nodes with available pods: 0
Jan  4 13:21:25.322: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:26.128: INFO: Number of nodes with available pods: 0
Jan  4 13:21:26.128: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:27.081: INFO: Number of nodes with available pods: 0
Jan  4 13:21:27.081: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:28.083: INFO: Number of nodes with available pods: 0
Jan  4 13:21:28.083: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:30.276: INFO: Number of nodes with available pods: 0
Jan  4 13:21:30.276: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:31.208: INFO: Number of nodes with available pods: 0
Jan  4 13:21:31.208: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:32.078: INFO: Number of nodes with available pods: 0
Jan  4 13:21:32.078: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:33.077: INFO: Number of nodes with available pods: 0
Jan  4 13:21:33.077: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:34.081: INFO: Number of nodes with available pods: 0
Jan  4 13:21:34.081: INFO: Node iruya-node is running more than one daemon pod
Jan  4 13:21:35.077: INFO: Number of nodes with available pods: 2
Jan  4 13:21:35.078: INFO: Number of running nodes: 2, number of available pods: 2
Jan  4 13:21:35.078: INFO: Update the DaemonSet to trigger a rollout
Jan  4 13:21:35.094: INFO: Updating DaemonSet daemon-set
Jan  4 13:21:48.123: INFO: Roll back the DaemonSet before rollout is complete
Jan  4 13:21:48.135: INFO: Updating DaemonSet daemon-set
Jan  4 13:21:48.135: INFO: Make sure DaemonSet rollback is complete
Jan  4 13:21:48.143: INFO: Wrong image for pod: daemon-set-qxrph. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 13:21:48.143: INFO: Pod daemon-set-qxrph is not available
Jan  4 13:21:49.512: INFO: Wrong image for pod: daemon-set-qxrph. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 13:21:49.512: INFO: Pod daemon-set-qxrph is not available
Jan  4 13:21:50.208: INFO: Wrong image for pod: daemon-set-qxrph. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 13:21:50.208: INFO: Pod daemon-set-qxrph is not available
Jan  4 13:21:51.209: INFO: Wrong image for pod: daemon-set-qxrph. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 13:21:51.209: INFO: Pod daemon-set-qxrph is not available
Jan  4 13:21:52.207: INFO: Wrong image for pod: daemon-set-qxrph. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 13:21:52.207: INFO: Pod daemon-set-qxrph is not available
Jan  4 13:21:53.209: INFO: Wrong image for pod: daemon-set-qxrph. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 13:21:53.209: INFO: Pod daemon-set-qxrph is not available
Jan  4 13:21:54.208: INFO: Pod daemon-set-txm4w is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6854, will wait for the garbage collector to delete the pods
Jan  4 13:21:56.489: INFO: Deleting DaemonSet.extensions daemon-set took: 25.369046ms
Jan  4 13:21:57.389: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.345639ms
Jan  4 13:22:06.598: INFO: Number of nodes with available pods: 0
Jan  4 13:22:06.598: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 13:22:06.603: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6854/daemonsets","resourceVersion":"19268344"},"items":null}

Jan  4 13:22:06.606: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6854/pods","resourceVersion":"19268344"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:22:06.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6854" for this suite.
Jan  4 13:22:12.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:22:12.763: INFO: namespace daemonsets-6854 deletion completed in 6.133809874s

• [SLOW TEST:49.963 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
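Editor's note: the DaemonSet rollback test above creates a RollingUpdate DaemonSet, updates it to the unpullable image foo:non-existent, and rolls back before the rollout completes. A hypothetical sketch of the initial object (selector and labels are illustrative; the name and images appear in the log):

```yaml
# Hypothetical sketch: RollingUpdate strategy with nginx:1.14-alpine;
# the test patches the image to "foo:non-existent" to wedge the rollout,
# then reverts, expecting no unnecessary restarts of healthy pods.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set   # hypothetical label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```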
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:22:12.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan  4 13:22:12.950: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:22:27.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2108" for this suite.
Jan  4 13:22:33.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:22:33.646: INFO: namespace pods-2108 deletion completed in 6.223133136s

• [SLOW TEST:20.883 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:22:33.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  4 13:22:47.387: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3c7c6997-a041-42c6-900d-4ddedb028fb0,GenerateName:,Namespace:events-4472,SelfLink:/api/v1/namespaces/events-4472/pods/send-events-3c7c6997-a041-42c6-900d-4ddedb028fb0,UID:62abaf11-1e0e-4330-baee-b3cf80e01faf,ResourceVersion:19268459,Generation:0,CreationTimestamp:2020-01-04 13:22:35 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 786430754,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9hp2q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hp2q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9hp2q true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014ebbd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0014ebbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:22:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:22:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:22:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:22:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-04 13:22:35 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-04 13:22:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://58d180e46158e2c3585e809452806c2b396d68924dfbf146f31a15af797cf20e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  4 13:22:49.400: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  4 13:22:51.414: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:22:51.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4472" for this suite.
Jan  4 13:23:29.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:23:29.600: INFO: namespace events-4472 deletion completed in 38.132918914s

• [SLOW TEST:55.953 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:23:29.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d22611ae-1700-465f-ac87-10659eade514
STEP: Creating a pod to test consume configMaps
Jan  4 13:23:29.683: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55" in namespace "configmap-644" to be "success or failure"
Jan  4 13:23:29.813: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Pending", Reason="", readiness=false. Elapsed: 130.189981ms
Jan  4 13:23:31.825: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14230794s
Jan  4 13:23:33.835: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152008172s
Jan  4 13:23:35.842: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159012609s
Jan  4 13:23:37.851: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167904765s
Jan  4 13:23:39.873: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Pending", Reason="", readiness=false. Elapsed: 10.190282461s
Jan  4 13:23:41.884: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.201408286s
STEP: Saw pod success
Jan  4 13:23:41.884: INFO: Pod "pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55" satisfied condition "success or failure"
Jan  4 13:23:41.888: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55 container configmap-volume-test: 
STEP: delete the pod
Jan  4 13:23:42.128: INFO: Waiting for pod pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55 to disappear
Jan  4 13:23:42.200: INFO: Pod pod-configmaps-ffac2945-7f80-4571-871d-680785de6b55 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:23:42.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-644" for this suite.
Jan  4 13:23:48.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:23:48.517: INFO: namespace configmap-644 deletion completed in 6.305779757s

• [SLOW TEST:18.917 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:23:48.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:24:23.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1271" for this suite.
Jan  4 13:24:29.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:24:29.352: INFO: namespace namespaces-1271 deletion completed in 6.202892898s
STEP: Destroying namespace "nsdeletetest-5658" for this suite.
Jan  4 13:24:29.355: INFO: Namespace nsdeletetest-5658 was already deleted
STEP: Destroying namespace "nsdeletetest-3987" for this suite.
Jan  4 13:24:35.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:24:35.454: INFO: namespace nsdeletetest-3987 deletion completed in 6.099138713s

• [SLOW TEST:46.936 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:24:35.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  4 13:24:35.692: INFO: Waiting up to 5m0s for pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d" in namespace "emptydir-5743" to be "success or failure"
Jan  4 13:24:35.839: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Pending", Reason="", readiness=false. Elapsed: 147.159351ms
Jan  4 13:24:37.869: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177054125s
Jan  4 13:24:39.880: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187868331s
Jan  4 13:24:41.894: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202358186s
Jan  4 13:24:43.903: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211723123s
Jan  4 13:24:45.925: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.233335662s
Jan  4 13:24:47.935: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Running", Reason="", readiness=true. Elapsed: 12.243193464s
Jan  4 13:24:49.946: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.25435992s
STEP: Saw pod success
Jan  4 13:24:49.946: INFO: Pod "pod-753e46e7-07c0-4581-bac8-771274c3ed0d" satisfied condition "success or failure"
Jan  4 13:24:49.955: INFO: Trying to get logs from node iruya-node pod pod-753e46e7-07c0-4581-bac8-771274c3ed0d container test-container: 
STEP: delete the pod
Jan  4 13:24:50.007: INFO: Waiting for pod pod-753e46e7-07c0-4581-bac8-771274c3ed0d to disappear
Jan  4 13:24:50.013: INFO: Pod pod-753e46e7-07c0-4581-bac8-771274c3ed0d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:24:50.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5743" for this suite.
Jan  4 13:24:56.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:24:56.241: INFO: namespace emptydir-5743 deletion completed in 6.220358021s

• [SLOW TEST:20.786 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:24:56.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  4 13:24:56.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-203'
Jan  4 13:24:57.095: INFO: stderr: ""
Jan  4 13:24:57.095: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:24:57.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-203'
Jan  4 13:24:57.272: INFO: stderr: ""
Jan  4 13:24:57.272: INFO: stdout: "update-demo-nautilus-lrmhh update-demo-nautilus-wl8vj "
Jan  4 13:24:57.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrmhh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:24:57.432: INFO: stderr: ""
Jan  4 13:24:57.432: INFO: stdout: ""
Jan  4 13:24:57.432: INFO: update-demo-nautilus-lrmhh is created but not running
Jan  4 13:25:02.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-203'
Jan  4 13:25:02.556: INFO: stderr: ""
Jan  4 13:25:02.556: INFO: stdout: "update-demo-nautilus-lrmhh update-demo-nautilus-wl8vj "
Jan  4 13:25:02.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrmhh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:25:02.670: INFO: stderr: ""
Jan  4 13:25:02.670: INFO: stdout: ""
Jan  4 13:25:02.670: INFO: update-demo-nautilus-lrmhh is created but not running
Jan  4 13:25:07.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-203'
Jan  4 13:25:07.771: INFO: stderr: ""
Jan  4 13:25:07.771: INFO: stdout: "update-demo-nautilus-lrmhh update-demo-nautilus-wl8vj "
Jan  4 13:25:07.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrmhh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:25:08.331: INFO: stderr: ""
Jan  4 13:25:08.331: INFO: stdout: ""
Jan  4 13:25:08.331: INFO: update-demo-nautilus-lrmhh is created but not running
Jan  4 13:25:13.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-203'
Jan  4 13:25:13.439: INFO: stderr: ""
Jan  4 13:25:13.439: INFO: stdout: "update-demo-nautilus-lrmhh update-demo-nautilus-wl8vj "
Jan  4 13:25:13.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrmhh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:25:13.527: INFO: stderr: ""
Jan  4 13:25:13.527: INFO: stdout: "true"
Jan  4 13:25:13.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrmhh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:25:13.605: INFO: stderr: ""
Jan  4 13:25:13.605: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:25:13.605: INFO: validating pod update-demo-nautilus-lrmhh
Jan  4 13:25:13.612: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:25:13.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:25:13.613: INFO: update-demo-nautilus-lrmhh is verified up and running
Jan  4 13:25:13.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl8vj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:25:13.684: INFO: stderr: ""
Jan  4 13:25:13.684: INFO: stdout: "true"
Jan  4 13:25:13.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl8vj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-203'
Jan  4 13:25:13.757: INFO: stderr: ""
Jan  4 13:25:13.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:25:13.757: INFO: validating pod update-demo-nautilus-wl8vj
Jan  4 13:25:13.796: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:25:13.796: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:25:13.796: INFO: update-demo-nautilus-wl8vj is verified up and running
STEP: using delete to clean up resources
Jan  4 13:25:13.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-203'
Jan  4 13:25:13.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 13:25:13.920: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  4 13:25:13.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-203'
Jan  4 13:25:14.134: INFO: stderr: "No resources found.\n"
Jan  4 13:25:14.135: INFO: stdout: ""
Jan  4 13:25:14.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-203 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 13:25:14.301: INFO: stderr: ""
Jan  4 13:25:14.301: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:25:14.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-203" for this suite.
Jan  4 13:25:38.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:25:38.516: INFO: namespace kubectl-203 deletion completed in 24.178553998s

• [SLOW TEST:42.275 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:25:38.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f960eb29-600b-4681-98ab-2825e6ddf47e
STEP: Creating a pod to test consume secrets
Jan  4 13:25:38.708: INFO: Waiting up to 5m0s for pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8" in namespace "secrets-9366" to be "success or failure"
Jan  4 13:25:38.723: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.758885ms
Jan  4 13:25:40.739: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030364187s
Jan  4 13:25:42.825: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116283934s
Jan  4 13:25:44.829: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120624734s
Jan  4 13:25:46.839: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130595416s
Jan  4 13:25:49.565: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.856279844s
STEP: Saw pod success
Jan  4 13:25:49.565: INFO: Pod "pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8" satisfied condition "success or failure"
Jan  4 13:25:49.571: INFO: Trying to get logs from node iruya-node pod pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8 container secret-env-test: 
STEP: delete the pod
Jan  4 13:25:49.800: INFO: Waiting for pod pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8 to disappear
Jan  4 13:25:49.806: INFO: Pod pod-secrets-6aa4081c-639d-42a3-bddf-8730b86b3af8 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:25:49.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9366" for this suite.
Jan  4 13:25:55.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:25:56.051: INFO: namespace secrets-9366 deletion completed in 6.237992249s

• [SLOW TEST:17.534 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:25:56.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:26:22.198: INFO: Container started at 2020-01-04 13:26:05 +0000 UTC, pod became ready at 2020-01-04 13:26:20 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:26:22.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6269" for this suite.
Jan  4 13:27:02.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:27:02.400: INFO: namespace container-probe-6269 deletion completed in 40.194162916s

• [SLOW TEST:66.348 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:27:02.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-511f1bbf-ddd8-4e56-89d3-5ffacf3bc58e
STEP: Creating a pod to test consume secrets
Jan  4 13:27:02.667: INFO: Waiting up to 5m0s for pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199" in namespace "secrets-3949" to be "success or failure"
Jan  4 13:27:02.675: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Pending", Reason="", readiness=false. Elapsed: 7.26392ms
Jan  4 13:27:04.685: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017406465s
Jan  4 13:27:06.700: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032748482s
Jan  4 13:27:08.713: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045590812s
Jan  4 13:27:10.724: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056559654s
Jan  4 13:27:12.736: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068452063s
Jan  4 13:27:14.748: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080338105s
STEP: Saw pod success
Jan  4 13:27:14.748: INFO: Pod "pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199" satisfied condition "success or failure"
Jan  4 13:27:14.751: INFO: Trying to get logs from node iruya-node pod pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:27:14.798: INFO: Waiting for pod pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199 to disappear
Jan  4 13:27:14.802: INFO: Pod pod-secrets-5a589cb7-1f87-4978-ad38-8cd7be0b0199 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:27:14.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3949" for this suite.
Jan  4 13:27:21.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:27:21.219: INFO: namespace secrets-3949 deletion completed in 6.410768788s

• [SLOW TEST:18.819 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
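The Secrets test above shows the framework's standard wait pattern: "Waiting up to 5m0s for pod ... to be 'success or failure'", then a `Phase="Pending"` line roughly every two seconds until `Phase="Succeeded"`. The real implementation is Go (`framework.WaitForPodSuccessInNamespace`); a minimal Python sketch of the same poll-with-timeout loop, with `get_phase` as an injected stand-in for the API call, looks like this:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase in
    `want`, or raise TimeoutError after `timeout` seconds. Mirrors the log
    pattern: Pod "...": Phase="Pending" ... Elapsed: 2.01s (illustrative
    sketch, not the e2e framework's actual Go code)."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}", elapsed={elapsed:.2f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still "{phase}" after {timeout}s')
        sleep(interval)
```

The injected `clock` and `sleep` exist only so the loop can be exercised without real waiting; the production loop simply sleeps between API polls.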
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:27:21.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6690
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 13:27:21.402: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 13:28:03.720: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6690 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 13:28:03.720: INFO: >>> kubeConfig: /root/.kube/config
I0104 13:28:03.797001       8 log.go:172] (0xc0012c2630) (0xc000241a40) Create stream
I0104 13:28:03.797195       8 log.go:172] (0xc0012c2630) (0xc000241a40) Stream added, broadcasting: 1
I0104 13:28:03.807019       8 log.go:172] (0xc0012c2630) Reply frame received for 1
I0104 13:28:03.807090       8 log.go:172] (0xc0012c2630) (0xc001727a40) Create stream
I0104 13:28:03.807100       8 log.go:172] (0xc0012c2630) (0xc001727a40) Stream added, broadcasting: 3
I0104 13:28:03.812345       8 log.go:172] (0xc0012c2630) Reply frame received for 3
I0104 13:28:03.812370       8 log.go:172] (0xc0012c2630) (0xc001727b80) Create stream
I0104 13:28:03.812380       8 log.go:172] (0xc0012c2630) (0xc001727b80) Stream added, broadcasting: 5
I0104 13:28:03.816299       8 log.go:172] (0xc0012c2630) Reply frame received for 5
I0104 13:28:04.143204       8 log.go:172] (0xc0012c2630) Data frame received for 3
I0104 13:28:04.143247       8 log.go:172] (0xc001727a40) (3) Data frame handling
I0104 13:28:04.143273       8 log.go:172] (0xc001727a40) (3) Data frame sent
I0104 13:28:04.270892       8 log.go:172] (0xc0012c2630) (0xc001727a40) Stream removed, broadcasting: 3
I0104 13:28:04.270987       8 log.go:172] (0xc0012c2630) Data frame received for 1
I0104 13:28:04.271005       8 log.go:172] (0xc0012c2630) (0xc001727b80) Stream removed, broadcasting: 5
I0104 13:28:04.271021       8 log.go:172] (0xc000241a40) (1) Data frame handling
I0104 13:28:04.271029       8 log.go:172] (0xc000241a40) (1) Data frame sent
I0104 13:28:04.271041       8 log.go:172] (0xc0012c2630) (0xc000241a40) Stream removed, broadcasting: 1
I0104 13:28:04.271050       8 log.go:172] (0xc0012c2630) Go away received
I0104 13:28:04.271187       8 log.go:172] (0xc0012c2630) (0xc000241a40) Stream removed, broadcasting: 1
I0104 13:28:04.271207       8 log.go:172] (0xc0012c2630) (0xc001727a40) Stream removed, broadcasting: 3
I0104 13:28:04.271216       8 log.go:172] (0xc0012c2630) (0xc001727b80) Stream removed, broadcasting: 5
Jan  4 13:28:04.271: INFO: Found all expected endpoints: [netserver-0]
Jan  4 13:28:04.324: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6690 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 13:28:04.324: INFO: >>> kubeConfig: /root/.kube/config
I0104 13:28:04.371658       8 log.go:172] (0xc00110afd0) (0xc002f460a0) Create stream
I0104 13:28:04.371684       8 log.go:172] (0xc00110afd0) (0xc002f460a0) Stream added, broadcasting: 1
I0104 13:28:04.375557       8 log.go:172] (0xc00110afd0) Reply frame received for 1
I0104 13:28:04.375577       8 log.go:172] (0xc00110afd0) (0xc0026b8820) Create stream
I0104 13:28:04.375584       8 log.go:172] (0xc00110afd0) (0xc0026b8820) Stream added, broadcasting: 3
I0104 13:28:04.376625       8 log.go:172] (0xc00110afd0) Reply frame received for 3
I0104 13:28:04.376661       8 log.go:172] (0xc00110afd0) (0xc0028f4000) Create stream
I0104 13:28:04.376675       8 log.go:172] (0xc00110afd0) (0xc0028f4000) Stream added, broadcasting: 5
I0104 13:28:04.380942       8 log.go:172] (0xc00110afd0) Reply frame received for 5
I0104 13:28:04.495662       8 log.go:172] (0xc00110afd0) Data frame received for 3
I0104 13:28:04.495700       8 log.go:172] (0xc0026b8820) (3) Data frame handling
I0104 13:28:04.495711       8 log.go:172] (0xc0026b8820) (3) Data frame sent
I0104 13:28:04.768126       8 log.go:172] (0xc00110afd0) (0xc0026b8820) Stream removed, broadcasting: 3
I0104 13:28:04.768251       8 log.go:172] (0xc00110afd0) Data frame received for 1
I0104 13:28:04.768269       8 log.go:172] (0xc002f460a0) (1) Data frame handling
I0104 13:28:04.768302       8 log.go:172] (0xc002f460a0) (1) Data frame sent
I0104 13:28:04.768358       8 log.go:172] (0xc00110afd0) (0xc002f460a0) Stream removed, broadcasting: 1
I0104 13:28:04.768456       8 log.go:172] (0xc00110afd0) (0xc0028f4000) Stream removed, broadcasting: 5
I0104 13:28:04.768497       8 log.go:172] (0xc00110afd0) (0xc002f460a0) Stream removed, broadcasting: 1
I0104 13:28:04.768515       8 log.go:172] (0xc00110afd0) (0xc0026b8820) Stream removed, broadcasting: 3
I0104 13:28:04.768527       8 log.go:172] (0xc00110afd0) (0xc0028f4000) Stream removed, broadcasting: 5
Jan  4 13:28:04.768: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:28:04.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0104 13:28:04.769072       8 log.go:172] (0xc00110afd0) Go away received
STEP: Destroying namespace "pod-network-test-6690" for this suite.
Jan  4 13:28:28.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:28:28.977: INFO: namespace pod-network-test-6690 deletion completed in 24.199573179s

• [SLOW TEST:67.758 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
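The networking check above execs `curl -s http://<pod-ip>:8080/hostName` from a host test pod against each netserver pod and passes once every expected hostname has been seen (`Found all expected endpoints: [netserver-0]`, then `[netserver-1]`). A simplified Python model of that verification step, with `fetch` injected in place of the kubectl-exec/curl plumbing (the URLs in the test are illustrative):

```python
def check_pod_endpoints(endpoints, expected_hostnames, fetch):
    """Fetch /hostName from each endpoint URL and verify that every
    expected netserver hostname was observed. `fetch(url)` stands in for
    the e2e test's `curl -s http://<pod-ip>:8080/hostName` exec; blank
    responses are dropped, like the log's `grep -v '^\\s*$'`."""
    seen = set()
    for url in endpoints:
        name = fetch(url).strip()
        if name:
            seen.add(name)
    missing = set(expected_hostnames) - seen
    if missing:
        raise AssertionError(f"did not reach endpoints: {sorted(missing)}")
    return sorted(seen)
```

The real test retries each endpoint until it responds or a per-endpoint timeout expires; this sketch shows only the success criterion.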
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:28:28.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  4 13:28:29.187: INFO: Waiting up to 5m0s for pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d" in namespace "emptydir-8852" to be "success or failure"
Jan  4 13:28:29.201: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.157084ms
Jan  4 13:28:31.212: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024793847s
Jan  4 13:28:33.222: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035028246s
Jan  4 13:28:35.232: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04457951s
Jan  4 13:28:37.240: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052825463s
Jan  4 13:28:39.247: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059549995s
Jan  4 13:28:41.255: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.067738095s
STEP: Saw pod success
Jan  4 13:28:41.255: INFO: Pod "pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d" satisfied condition "success or failure"
Jan  4 13:28:41.260: INFO: Trying to get logs from node iruya-node pod pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d container test-container: 
STEP: delete the pod
Jan  4 13:28:41.539: INFO: Waiting for pod pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d to disappear
Jan  4 13:28:41.547: INFO: Pod pod-77a601d5-9da2-42cd-8adf-dd9a70a8e86d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:28:41.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8852" for this suite.
Jan  4 13:28:47.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:28:47.745: INFO: namespace emptydir-8852 deletion completed in 6.177035014s

• [SLOW TEST:18.767 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:28:47.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-d3e8421a-cc0c-4dae-ada5-66b38485fc6c
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:28:47.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5819" for this suite.
Jan  4 13:28:53.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:28:54.081: INFO: namespace configmap-5819 deletion completed in 6.135592581s

• [SLOW TEST:6.336 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
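The ConfigMap test above succeeds by confirming the API server *rejects* a ConfigMap whose data map contains an empty key. A rough Python approximation of the key validation the API server applies (upstream it is `IsConfigMapKey` in k8s.io/apimachinery; this sketch omits some corner cases such as the literal keys "." and ".."):

```python
import re

# Approximation of the API server's ConfigMap data-key validation:
# keys must be non-empty, <= 253 characters, and match [-._a-zA-Z0-9]+.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_configmap_key(key: str) -> list:
    """Return a list of validation errors for a ConfigMap data key
    (empty list means the key is acceptable)."""
    errs = []
    if not key:
        errs.append("key must not be empty")
    elif not _KEY_RE.match(key):
        errs.append("key may only contain alphanumerics, '-', '_' and '.'")
    if len(key) > 253:
        errs.append("key must be no more than 253 characters")
    return errs
```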
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:28:54.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3e1466f1-dad7-4746-9b7e-84a637080660
STEP: Creating a pod to test consume secrets
Jan  4 13:28:54.231: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b" in namespace "projected-8975" to be "success or failure"
Jan  4 13:28:54.268: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.048199ms
Jan  4 13:28:56.276: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04435776s
Jan  4 13:28:58.288: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056296119s
Jan  4 13:29:00.299: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06701752s
Jan  4 13:29:02.309: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077807443s
Jan  4 13:29:04.363: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131944492s
STEP: Saw pod success
Jan  4 13:29:04.363: INFO: Pod "pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b" satisfied condition "success or failure"
Jan  4 13:29:04.381: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:29:04.598: INFO: Waiting for pod pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b to disappear
Jan  4 13:29:04.624: INFO: Pod pod-projected-secrets-83052896-c5b7-4df2-816d-8dff6798bd8b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:29:04.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8975" for this suite.
Jan  4 13:29:10.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:29:10.981: INFO: namespace projected-8975 deletion completed in 6.345568239s

• [SLOW TEST:16.899 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:29:10.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  4 13:29:11.189: INFO: Waiting up to 5m0s for pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4" in namespace "emptydir-2972" to be "success or failure"
Jan  4 13:29:11.203: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.477106ms
Jan  4 13:29:13.211: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02165162s
Jan  4 13:29:15.217: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027495702s
Jan  4 13:29:18.336: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.146488312s
Jan  4 13:29:20.376: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.187064236s
Jan  4 13:29:22.387: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.198248321s
Jan  4 13:29:24.393: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Running", Reason="", readiness=true. Elapsed: 13.203628605s
Jan  4 13:29:26.401: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.211806904s
STEP: Saw pod success
Jan  4 13:29:26.401: INFO: Pod "pod-1493aa41-b720-4431-867c-ffbef30b05f4" satisfied condition "success or failure"
Jan  4 13:29:26.405: INFO: Trying to get logs from node iruya-node pod pod-1493aa41-b720-4431-867c-ffbef30b05f4 container test-container: 
STEP: delete the pod
Jan  4 13:29:26.501: INFO: Waiting for pod pod-1493aa41-b720-4431-867c-ffbef30b05f4 to disappear
Jan  4 13:29:26.514: INFO: Pod pod-1493aa41-b720-4431-867c-ffbef30b05f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:29:26.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2972" for this suite.
Jan  4 13:29:32.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:29:32.748: INFO: namespace emptydir-2972 deletion completed in 6.222747642s

• [SLOW TEST:21.767 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
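Both EmptyDir tests above work the same way: the pod's container stats the mounted volume and prints its mode string, which the test compares against the expected permissions for that medium (the container output is truncated in this log). The mode-string step can be sketched locally with Python's stat module; the directory and mode below are illustrative, not the test's actual mount point:

```python
import os
import stat
import tempfile

def file_mode_string(path: str) -> str:
    """Return an ls-style mode string (e.g. 'drwxrwxrwx') for `path` —
    the kind of value the e2e test container echoes back for the volume."""
    return stat.filemode(os.stat(path).st_mode)

# Illustrative stand-in for the emptyDir mount: a local directory chmod'ed
# to 0777 so its mode string can be inspected the same way.
volume_dir = tempfile.mkdtemp()
os.chmod(volume_dir, 0o777)
```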
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:29:32.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  4 13:29:41.054: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:29:41.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9584" for this suite.
Jan  4 13:29:47.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:29:47.232: INFO: namespace container-runtime-9584 deletion completed in 6.145501236s

• [SLOW TEST:14.484 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:29:47.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-b2f259e2-fa4d-4299-9c80-603295a2c3c3 in namespace container-probe-3507
Jan  4 13:29:59.430: INFO: Started pod busybox-b2f259e2-fa4d-4299-9c80-603295a2c3c3 in namespace container-probe-3507
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 13:29:59.436: INFO: Initial restart count of pod busybox-b2f259e2-fa4d-4299-9c80-603295a2c3c3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:34:00.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3507" for this suite.
Jan  4 13:34:06.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:34:06.509: INFO: namespace container-probe-3507 deletion completed in 6.17340983s

• [SLOW TEST:259.276 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
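The probe test above runs a busybox pod with an exec liveness probe (`cat /tmp/health`) and watches for about four minutes to confirm `restartCount` stays 0, since the probed file keeps existing. The kubelet's decision rule is, in essence: run the probe command each period and restart the container after `failureThreshold` consecutive failures. A simplified model of that bookkeeping (not the kubelet's code):

```python
def run_liveness_probes(probe_results, failure_threshold=3):
    """Walk a sequence of exec-probe outcomes (True = command exited 0)
    and count container restarts. A restart fires after
    `failure_threshold` consecutive failures and resets the counter,
    mimicking kubelet liveness-probe handling."""
    restarts = 0
    consecutive_failures = 0
    for ok in probe_results:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts
```

In the passing run above, every probe succeeds, so the simulated restart count stays 0, matching the "Initial restart count ... is 0" check.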
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:34:06.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ac1b291f-3229-4789-8134-e8c8e9db7f31
STEP: Creating a pod to test consume secrets
Jan  4 13:34:06.724: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1" in namespace "projected-8701" to be "success or failure"
Jan  4 13:34:06.742: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.066558ms
Jan  4 13:34:08.757: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033303515s
Jan  4 13:34:10.773: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04927759s
Jan  4 13:34:12.783: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059827028s
Jan  4 13:34:14.791: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067064431s
Jan  4 13:34:16.817: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093671681s
STEP: Saw pod success
Jan  4 13:34:16.817: INFO: Pod "pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1" satisfied condition "success or failure"
Jan  4 13:34:16.824: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:34:16.921: INFO: Waiting for pod pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1 to disappear
Jan  4 13:34:16.943: INFO: Pod pod-projected-secrets-a0a5f7f2-4744-48fc-b9a2-9f3ead8b47b1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:34:16.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8701" for this suite.
Jan  4 13:34:23.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:34:23.148: INFO: namespace projected-8701 deletion completed in 6.186321346s

• [SLOW TEST:16.638 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:34:23.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 13:34:23.330: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:34:43.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4747" for this suite.
Jan  4 13:34:49.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:34:49.848: INFO: namespace init-container-4747 deletion completed in 6.240389449s

• [SLOW TEST:26.700 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
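The InitContainer test above verifies the ordering contract: on a pod, init containers run one at a time, each to completion, before any app container starts; on a `restartPolicy: Never` pod, a failed init container fails the whole pod. A minimal model of that sequencing (illustrative container tuples, not the kubelet's implementation):

```python
def run_pod_once(init_containers, app_containers):
    """Simulate a restartPolicy=Never pod: run init containers
    sequentially; if one fails, the pod fails and app containers never
    start. Each container is a (name, succeeds) pair; returns
    (pod_phase, names_started_in_order)."""
    started = []
    for name, succeeds in init_containers:
        started.append(name)
        if not succeeds:
            return "Failed", started
    for name, _ in app_containers:
        started.append(name)
    return "Succeeded", started
```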
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:34:49.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0104 13:35:30.900134       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 13:35:30.900: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:35:30.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-934" for this suite.
Jan  4 13:35:50.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:35:51.070: INFO: namespace gc-934 deletion completed in 20.165216384s

• [SLOW TEST:61.222 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:35:51.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-51f7ed17-739e-445e-9c3a-cbc3e66f9550
STEP: Creating a pod to test consume secrets
Jan  4 13:35:51.405: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d" in namespace "projected-1754" to be "success or failure"
Jan  4 13:35:51.420: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.333532ms
Jan  4 13:35:53.484: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078535211s
Jan  4 13:35:55.508: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10251692s
Jan  4 13:35:57.515: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109519814s
Jan  4 13:35:59.592: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186875813s
Jan  4 13:36:01.670: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.264862957s
STEP: Saw pod success
Jan  4 13:36:01.670: INFO: Pod "pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d" satisfied condition "success or failure"
Jan  4 13:36:01.675: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:36:01.759: INFO: Waiting for pod pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d to disappear
Jan  4 13:36:01.864: INFO: Pod pod-projected-secrets-c6b78439-84b7-4dfa-bc37-efe7b25d697d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:36:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1754" for this suite.
Jan  4 13:36:07.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:36:08.023: INFO: namespace projected-1754 deletion completed in 6.143450392s

• [SLOW TEST:16.953 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:36:08.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 13:36:08.185: INFO: Waiting up to 5m0s for pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1" in namespace "downward-api-7126" to be "success or failure"
Jan  4 13:36:08.190: INFO: Pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.732531ms
Jan  4 13:36:10.199: INFO: Pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013716007s
Jan  4 13:36:12.206: INFO: Pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020089211s
Jan  4 13:36:14.220: INFO: Pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034662785s
Jan  4 13:36:16.232: INFO: Pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046196456s
STEP: Saw pod success
Jan  4 13:36:16.232: INFO: Pod "downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1" satisfied condition "success or failure"
Jan  4 13:36:16.235: INFO: Trying to get logs from node iruya-node pod downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1 container dapi-container: 
STEP: delete the pod
Jan  4 13:36:16.301: INFO: Waiting for pod downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1 to disappear
Jan  4 13:36:16.308: INFO: Pod downward-api-4a7c6a04-51ee-48ea-a9ca-bb8ee4e05af1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:36:16.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7126" for this suite.
Jan  4 13:36:22.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:36:22.650: INFO: namespace downward-api-7126 deletion completed in 6.334938265s

• [SLOW TEST:14.626 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:36:22.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4057
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4057
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4057
Jan  4 13:36:22.825: INFO: Found 0 stateful pods, waiting for 1
Jan  4 13:36:32.830: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  4 13:36:32.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 13:36:36.051: INFO: stderr: "I0104 13:36:35.449756    3327 log.go:172] (0xc0009c0420) (0xc000879360) Create stream\nI0104 13:36:35.449804    3327 log.go:172] (0xc0009c0420) (0xc000879360) Stream added, broadcasting: 1\nI0104 13:36:35.456745    3327 log.go:172] (0xc0009c0420) Reply frame received for 1\nI0104 13:36:35.456784    3327 log.go:172] (0xc0009c0420) (0xc0001d6c80) Create stream\nI0104 13:36:35.456802    3327 log.go:172] (0xc0009c0420) (0xc0001d6c80) Stream added, broadcasting: 3\nI0104 13:36:35.460369    3327 log.go:172] (0xc0009c0420) Reply frame received for 3\nI0104 13:36:35.460485    3327 log.go:172] (0xc0009c0420) (0xc00065c000) Create stream\nI0104 13:36:35.460503    3327 log.go:172] (0xc0009c0420) (0xc00065c000) Stream added, broadcasting: 5\nI0104 13:36:35.464943    3327 log.go:172] (0xc0009c0420) Reply frame received for 5\nI0104 13:36:35.674949    3327 log.go:172] (0xc0009c0420) Data frame received for 5\nI0104 13:36:35.674978    3327 log.go:172] (0xc00065c000) (5) Data frame handling\nI0104 13:36:35.674989    3327 log.go:172] (0xc00065c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 13:36:35.856367    3327 log.go:172] (0xc0009c0420) Data frame received for 3\nI0104 13:36:35.856564    3327 log.go:172] (0xc0001d6c80) (3) Data frame handling\nI0104 13:36:35.856620    3327 log.go:172] (0xc0001d6c80) (3) Data frame sent\nI0104 13:36:36.037103    3327 log.go:172] (0xc0009c0420) (0xc0001d6c80) Stream removed, broadcasting: 3\nI0104 13:36:36.037360    3327 log.go:172] (0xc0009c0420) Data frame received for 1\nI0104 13:36:36.037392    3327 log.go:172] (0xc000879360) (1) Data frame handling\nI0104 13:36:36.037481    3327 log.go:172] (0xc000879360) (1) Data frame sent\nI0104 13:36:36.037544    3327 log.go:172] (0xc0009c0420) (0xc000879360) Stream removed, broadcasting: 1\nI0104 13:36:36.037634    3327 log.go:172] (0xc0009c0420) (0xc00065c000) Stream removed, broadcasting: 5\nI0104 13:36:36.037773    3327 log.go:172] 
(0xc0009c0420) Go away received\nI0104 13:36:36.038758    3327 log.go:172] (0xc0009c0420) (0xc000879360) Stream removed, broadcasting: 1\nI0104 13:36:36.038798    3327 log.go:172] (0xc0009c0420) (0xc0001d6c80) Stream removed, broadcasting: 3\nI0104 13:36:36.038829    3327 log.go:172] (0xc0009c0420) (0xc00065c000) Stream removed, broadcasting: 5\n"
Jan  4 13:36:36.051: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 13:36:36.051: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 13:36:36.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  4 13:36:46.068: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 13:36:46.068: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 13:36:46.109: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999402s
Jan  4 13:36:47.118: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976486043s
Jan  4 13:36:48.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.968038506s
Jan  4 13:36:49.140: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.959425706s
Jan  4 13:36:50.150: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.945926197s
Jan  4 13:36:51.234: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.936171667s
Jan  4 13:36:52.256: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.851963065s
Jan  4 13:36:53.276: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.829279343s
Jan  4 13:36:54.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.809771637s
Jan  4 13:36:55.304: INFO: Verifying statefulset ss doesn't scale past 1 for another 794.351999ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4057
Jan  4 13:36:56.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 13:36:57.577: INFO: stderr: "I0104 13:36:56.684756    3346 log.go:172] (0xc00013ce70) (0xc0005f08c0) Create stream\nI0104 13:36:56.684910    3346 log.go:172] (0xc00013ce70) (0xc0005f08c0) Stream added, broadcasting: 1\nI0104 13:36:56.701590    3346 log.go:172] (0xc00013ce70) Reply frame received for 1\nI0104 13:36:56.701624    3346 log.go:172] (0xc00013ce70) (0xc00081c000) Create stream\nI0104 13:36:56.701643    3346 log.go:172] (0xc00013ce70) (0xc00081c000) Stream added, broadcasting: 3\nI0104 13:36:56.706848    3346 log.go:172] (0xc00013ce70) Reply frame received for 3\nI0104 13:36:56.706894    3346 log.go:172] (0xc00013ce70) (0xc000746000) Create stream\nI0104 13:36:56.706904    3346 log.go:172] (0xc00013ce70) (0xc000746000) Stream added, broadcasting: 5\nI0104 13:36:56.709245    3346 log.go:172] (0xc00013ce70) Reply frame received for 5\nI0104 13:36:57.270304    3346 log.go:172] (0xc00013ce70) Data frame received for 5\nI0104 13:36:57.270395    3346 log.go:172] (0xc000746000) (5) Data frame handling\nI0104 13:36:57.270411    3346 log.go:172] (0xc000746000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 13:36:57.270443    3346 log.go:172] (0xc00013ce70) Data frame received for 3\nI0104 13:36:57.270450    3346 log.go:172] (0xc00081c000) (3) Data frame handling\nI0104 13:36:57.270466    3346 log.go:172] (0xc00081c000) (3) Data frame sent\nI0104 13:36:57.569105    3346 log.go:172] (0xc00013ce70) (0xc00081c000) Stream removed, broadcasting: 3\nI0104 13:36:57.569236    3346 log.go:172] (0xc00013ce70) Data frame received for 1\nI0104 13:36:57.569253    3346 log.go:172] (0xc0005f08c0) (1) Data frame handling\nI0104 13:36:57.569264    3346 log.go:172] (0xc0005f08c0) (1) Data frame sent\nI0104 13:36:57.569287    3346 log.go:172] (0xc00013ce70) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0104 13:36:57.569340    3346 log.go:172] (0xc00013ce70) (0xc000746000) Stream removed, broadcasting: 5\nI0104 13:36:57.569434    3346 log.go:172] 
(0xc00013ce70) Go away received\nI0104 13:36:57.570755    3346 log.go:172] (0xc00013ce70) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0104 13:36:57.570774    3346 log.go:172] (0xc00013ce70) (0xc00081c000) Stream removed, broadcasting: 3\nI0104 13:36:57.570786    3346 log.go:172] (0xc00013ce70) (0xc000746000) Stream removed, broadcasting: 5\n"
Jan  4 13:36:57.577: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 13:36:57.577: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 13:36:57.595: INFO: Found 1 stateful pods, waiting for 3
Jan  4 13:37:07.606: INFO: Found 2 stateful pods, waiting for 3
Jan  4 13:37:17.607: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 13:37:17.607: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 13:37:17.607: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 13:37:27.610: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 13:37:27.610: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 13:37:27.610: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  4 13:37:27.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 13:37:28.170: INFO: stderr: "I0104 13:37:27.791564    3364 log.go:172] (0xc000128dc0) (0xc0005ec960) Create stream\nI0104 13:37:27.791790    3364 log.go:172] (0xc000128dc0) (0xc0005ec960) Stream added, broadcasting: 1\nI0104 13:37:27.801617    3364 log.go:172] (0xc000128dc0) Reply frame received for 1\nI0104 13:37:27.801730    3364 log.go:172] (0xc000128dc0) (0xc0009d2000) Create stream\nI0104 13:37:27.801765    3364 log.go:172] (0xc000128dc0) (0xc0009d2000) Stream added, broadcasting: 3\nI0104 13:37:27.803352    3364 log.go:172] (0xc000128dc0) Reply frame received for 3\nI0104 13:37:27.803412    3364 log.go:172] (0xc000128dc0) (0xc0009be000) Create stream\nI0104 13:37:27.803443    3364 log.go:172] (0xc000128dc0) (0xc0009be000) Stream added, broadcasting: 5\nI0104 13:37:27.806567    3364 log.go:172] (0xc000128dc0) Reply frame received for 5\nI0104 13:37:28.041305    3364 log.go:172] (0xc000128dc0) Data frame received for 5\nI0104 13:37:28.041361    3364 log.go:172] (0xc0009be000) (5) Data frame handling\nI0104 13:37:28.041373    3364 log.go:172] (0xc0009be000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 13:37:28.041385    3364 log.go:172] (0xc000128dc0) Data frame received for 3\nI0104 13:37:28.041392    3364 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0104 13:37:28.041397    3364 log.go:172] (0xc0009d2000) (3) Data frame sent\nI0104 13:37:28.158444    3364 log.go:172] (0xc000128dc0) Data frame received for 1\nI0104 13:37:28.158614    3364 log.go:172] (0xc000128dc0) (0xc0009d2000) Stream removed, broadcasting: 3\nI0104 13:37:28.158700    3364 log.go:172] (0xc0005ec960) (1) Data frame handling\nI0104 13:37:28.158836    3364 log.go:172] (0xc0005ec960) (1) Data frame sent\nI0104 13:37:28.158906    3364 log.go:172] (0xc000128dc0) (0xc0009be000) Stream removed, broadcasting: 5\nI0104 13:37:28.158966    3364 log.go:172] (0xc000128dc0) (0xc0005ec960) Stream removed, broadcasting: 1\nI0104 13:37:28.159015    3364 log.go:172] 
(0xc000128dc0) Go away received\nI0104 13:37:28.159474    3364 log.go:172] (0xc000128dc0) (0xc0005ec960) Stream removed, broadcasting: 1\nI0104 13:37:28.159507    3364 log.go:172] (0xc000128dc0) (0xc0009d2000) Stream removed, broadcasting: 3\nI0104 13:37:28.159525    3364 log.go:172] (0xc000128dc0) (0xc0009be000) Stream removed, broadcasting: 5\n"
Jan  4 13:37:28.170: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 13:37:28.171: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 13:37:28.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 13:37:28.803: INFO: stderr: "I0104 13:37:28.440032    3383 log.go:172] (0xc000a5e0b0) (0xc0009b0140) Create stream\nI0104 13:37:28.440140    3383 log.go:172] (0xc000a5e0b0) (0xc0009b0140) Stream added, broadcasting: 1\nI0104 13:37:28.444008    3383 log.go:172] (0xc000a5e0b0) Reply frame received for 1\nI0104 13:37:28.444045    3383 log.go:172] (0xc000a5e0b0) (0xc0003de280) Create stream\nI0104 13:37:28.444053    3383 log.go:172] (0xc000a5e0b0) (0xc0003de280) Stream added, broadcasting: 3\nI0104 13:37:28.445150    3383 log.go:172] (0xc000a5e0b0) Reply frame received for 3\nI0104 13:37:28.445171    3383 log.go:172] (0xc000a5e0b0) (0xc0009b01e0) Create stream\nI0104 13:37:28.445180    3383 log.go:172] (0xc000a5e0b0) (0xc0009b01e0) Stream added, broadcasting: 5\nI0104 13:37:28.446882    3383 log.go:172] (0xc000a5e0b0) Reply frame received for 5\nI0104 13:37:28.620789    3383 log.go:172] (0xc000a5e0b0) Data frame received for 5\nI0104 13:37:28.620810    3383 log.go:172] (0xc0009b01e0) (5) Data frame handling\nI0104 13:37:28.620820    3383 log.go:172] (0xc0009b01e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 13:37:28.709627    3383 log.go:172] (0xc000a5e0b0) Data frame received for 3\nI0104 13:37:28.709680    3383 log.go:172] (0xc0003de280) (3) Data frame handling\nI0104 13:37:28.709710    3383 log.go:172] (0xc0003de280) (3) Data frame sent\nI0104 13:37:28.791572    3383 log.go:172] (0xc000a5e0b0) Data frame received for 1\nI0104 13:37:28.791652    3383 log.go:172] (0xc000a5e0b0) (0xc0009b01e0) Stream removed, broadcasting: 5\nI0104 13:37:28.791715    3383 log.go:172] (0xc0009b0140) (1) Data frame handling\nI0104 13:37:28.791736    3383 log.go:172] (0xc0009b0140) (1) Data frame sent\nI0104 13:37:28.791749    3383 log.go:172] (0xc000a5e0b0) (0xc0003de280) Stream removed, broadcasting: 3\nI0104 13:37:28.791827    3383 log.go:172] (0xc000a5e0b0) (0xc0009b0140) Stream removed, broadcasting: 1\nI0104 13:37:28.792972    3383 log.go:172] 
(0xc000a5e0b0) (0xc0009b0140) Stream removed, broadcasting: 1\nI0104 13:37:28.793002    3383 log.go:172] (0xc000a5e0b0) (0xc0003de280) Stream removed, broadcasting: 3\nI0104 13:37:28.793083    3383 log.go:172] (0xc000a5e0b0) (0xc0009b01e0) Stream removed, broadcasting: 5\n"
Jan  4 13:37:28.803: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 13:37:28.803: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 13:37:28.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 13:37:29.562: INFO: stderr: "I0104 13:37:29.075748    3402 log.go:172] (0xc00084a420) (0xc00072c640) Create stream\nI0104 13:37:29.075886    3402 log.go:172] (0xc00084a420) (0xc00072c640) Stream added, broadcasting: 1\nI0104 13:37:29.086524    3402 log.go:172] (0xc00084a420) Reply frame received for 1\nI0104 13:37:29.086636    3402 log.go:172] (0xc00084a420) (0xc0008a4000) Create stream\nI0104 13:37:29.086652    3402 log.go:172] (0xc00084a420) (0xc0008a4000) Stream added, broadcasting: 3\nI0104 13:37:29.088414    3402 log.go:172] (0xc00084a420) Reply frame received for 3\nI0104 13:37:29.088456    3402 log.go:172] (0xc00084a420) (0xc00072c6e0) Create stream\nI0104 13:37:29.088466    3402 log.go:172] (0xc00084a420) (0xc00072c6e0) Stream added, broadcasting: 5\nI0104 13:37:29.089949    3402 log.go:172] (0xc00084a420) Reply frame received for 5\nI0104 13:37:29.254940    3402 log.go:172] (0xc00084a420) Data frame received for 5\nI0104 13:37:29.254995    3402 log.go:172] (0xc00072c6e0) (5) Data frame handling\nI0104 13:37:29.255007    3402 log.go:172] (0xc00072c6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 13:37:29.324137    3402 log.go:172] (0xc00084a420) Data frame received for 3\nI0104 13:37:29.324458    3402 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0104 13:37:29.324555    3402 log.go:172] (0xc0008a4000) (3) Data frame sent\nI0104 13:37:29.546465    3402 log.go:172] (0xc00084a420) Data frame received for 1\nI0104 13:37:29.546668    3402 log.go:172] (0xc00084a420) (0xc0008a4000) Stream removed, broadcasting: 3\nI0104 13:37:29.547226    3402 log.go:172] (0xc00072c640) (1) Data frame handling\nI0104 13:37:29.547296    3402 log.go:172] (0xc00072c640) (1) Data frame sent\nI0104 13:37:29.547463    3402 log.go:172] (0xc00084a420) (0xc00072c6e0) Stream removed, broadcasting: 5\nI0104 13:37:29.547551    3402 log.go:172] (0xc00084a420) (0xc00072c640) Stream removed, broadcasting: 1\nI0104 13:37:29.547575    3402 log.go:172] 
(0xc00084a420) Go away received\nI0104 13:37:29.549601    3402 log.go:172] (0xc00084a420) (0xc00072c640) Stream removed, broadcasting: 1\nI0104 13:37:29.549626    3402 log.go:172] (0xc00084a420) (0xc0008a4000) Stream removed, broadcasting: 3\nI0104 13:37:29.549642    3402 log.go:172] (0xc00084a420) (0xc00072c6e0) Stream removed, broadcasting: 5\n"
Jan  4 13:37:29.562: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 13:37:29.562: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 13:37:29.562: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 13:37:29.572: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  4 13:37:39.649: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 13:37:39.649: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 13:37:39.649: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 13:37:39.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999977s
Jan  4 13:37:40.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978401066s
Jan  4 13:37:41.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971835689s
Jan  4 13:37:42.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.937046488s
Jan  4 13:37:43.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.929178262s
Jan  4 13:37:44.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.82158461s
Jan  4 13:37:45.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.78893832s
Jan  4 13:37:46.890: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.780182838s
Jan  4 13:37:47.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.769521595s
Jan  4 13:37:48.909: INFO: Verifying statefulset ss doesn't scale past 3 for another 759.881142ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4057
Jan  4 13:37:49.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 13:37:50.467: INFO: stderr: "I0104 13:37:50.150492    3423 log.go:172] (0xc000426420) (0xc0006c46e0) Create stream\nI0104 13:37:50.150728    3423 log.go:172] (0xc000426420) (0xc0006c46e0) Stream added, broadcasting: 1\nI0104 13:37:50.156618    3423 log.go:172] (0xc000426420) Reply frame received for 1\nI0104 13:37:50.156643    3423 log.go:172] (0xc000426420) (0xc0006c4780) Create stream\nI0104 13:37:50.156653    3423 log.go:172] (0xc000426420) (0xc0006c4780) Stream added, broadcasting: 3\nI0104 13:37:50.157964    3423 log.go:172] (0xc000426420) Reply frame received for 3\nI0104 13:37:50.157994    3423 log.go:172] (0xc000426420) (0xc0002b4000) Create stream\nI0104 13:37:50.158010    3423 log.go:172] (0xc000426420) (0xc0002b4000) Stream added, broadcasting: 5\nI0104 13:37:50.159908    3423 log.go:172] (0xc000426420) Reply frame received for 5\nI0104 13:37:50.301586    3423 log.go:172] (0xc000426420) Data frame received for 5\nI0104 13:37:50.301687    3423 log.go:172] (0xc0002b4000) (5) Data frame handling\nI0104 13:37:50.301729    3423 log.go:172] (0xc0002b4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 13:37:50.303821    3423 log.go:172] (0xc000426420) Data frame received for 3\nI0104 13:37:50.303845    3423 log.go:172] (0xc0006c4780) (3) Data frame handling\nI0104 13:37:50.303871    3423 log.go:172] (0xc0006c4780) (3) Data frame sent\nI0104 13:37:50.453658    3423 log.go:172] (0xc000426420) (0xc0006c4780) Stream removed, broadcasting: 3\nI0104 13:37:50.454090    3423 log.go:172] (0xc000426420) Data frame received for 1\nI0104 13:37:50.454126    3423 log.go:172] (0xc0006c46e0) (1) Data frame handling\nI0104 13:37:50.454166    3423 log.go:172] (0xc0006c46e0) (1) Data frame sent\nI0104 13:37:50.454300    3423 log.go:172] (0xc000426420) (0xc0006c46e0) Stream removed, broadcasting: 1\nI0104 13:37:50.454438    3423 log.go:172] (0xc000426420) (0xc0002b4000) Stream removed, broadcasting: 5\nI0104 13:37:50.454611    3423 log.go:172] 
(0xc000426420) Go away received\nI0104 13:37:50.455031    3423 log.go:172] (0xc000426420) (0xc0006c46e0) Stream removed, broadcasting: 1\nI0104 13:37:50.455054    3423 log.go:172] (0xc000426420) (0xc0006c4780) Stream removed, broadcasting: 3\nI0104 13:37:50.455074    3423 log.go:172] (0xc000426420) (0xc0002b4000) Stream removed, broadcasting: 5\n"
Jan  4 13:37:50.468: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 13:37:50.468: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 13:37:50.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 13:37:50.923: INFO: stderr: "I0104 13:37:50.633077    3446 log.go:172] (0xc000928370) (0xc000910640) Create stream\nI0104 13:37:50.633252    3446 log.go:172] (0xc000928370) (0xc000910640) Stream added, broadcasting: 1\nI0104 13:37:50.638056    3446 log.go:172] (0xc000928370) Reply frame received for 1\nI0104 13:37:50.638082    3446 log.go:172] (0xc000928370) (0xc000508640) Create stream\nI0104 13:37:50.638091    3446 log.go:172] (0xc000928370) (0xc000508640) Stream added, broadcasting: 3\nI0104 13:37:50.639036    3446 log.go:172] (0xc000928370) Reply frame received for 3\nI0104 13:37:50.639062    3446 log.go:172] (0xc000928370) (0xc0002e8000) Create stream\nI0104 13:37:50.639074    3446 log.go:172] (0xc000928370) (0xc0002e8000) Stream added, broadcasting: 5\nI0104 13:37:50.639991    3446 log.go:172] (0xc000928370) Reply frame received for 5\nI0104 13:37:50.755808    3446 log.go:172] (0xc000928370) Data frame received for 5\nI0104 13:37:50.756071    3446 log.go:172] (0xc0002e8000) (5) Data frame handling\nI0104 13:37:50.756105    3446 log.go:172] (0xc0002e8000) (5) Data frame sent\nI0104 13:37:50.756283    3446 log.go:172] (0xc000928370) Data frame received for 3\nI0104 13:37:50.756327    3446 log.go:172] (0xc000508640) (3) Data frame handling\nI0104 13:37:50.756361    3446 log.go:172] (0xc000508640) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 13:37:50.912180    3446 log.go:172] (0xc000928370) Data frame received for 1\nI0104 13:37:50.912299    3446 log.go:172] (0xc000910640) (1) Data frame handling\nI0104 13:37:50.912329    3446 log.go:172] (0xc000910640) (1) Data frame sent\nI0104 13:37:50.912338    3446 log.go:172] (0xc000928370) (0xc000910640) Stream removed, broadcasting: 1\nI0104 13:37:50.912439    3446 log.go:172] (0xc000928370) (0xc000508640) Stream removed, broadcasting: 3\nI0104 13:37:50.912457    3446 log.go:172] (0xc000928370) (0xc0002e8000) Stream removed, broadcasting: 5\nI0104 13:37:50.912476    3446 log.go:172] (0xc000928370) Go away received\nI0104 13:37:50.912637    3446 log.go:172] (0xc000928370) (0xc000910640) Stream removed, broadcasting: 1\nI0104 13:37:50.912649    3446 log.go:172] (0xc000928370) (0xc000508640) Stream removed, broadcasting: 3\nI0104 13:37:50.912662    3446 log.go:172] (0xc000928370) (0xc0002e8000) Stream removed, broadcasting: 5\n"
Jan  4 13:37:50.923: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 13:37:50.923: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 13:37:50.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4057 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 13:37:51.551: INFO: stderr: "I0104 13:37:51.094757    3465 log.go:172] (0xc0007e6370) (0xc0007c0640) Create stream\nI0104 13:37:51.094873    3465 log.go:172] (0xc0007e6370) (0xc0007c0640) Stream added, broadcasting: 1\nI0104 13:37:51.100274    3465 log.go:172] (0xc0007e6370) Reply frame received for 1\nI0104 13:37:51.100300    3465 log.go:172] (0xc0007e6370) (0xc0005720a0) Create stream\nI0104 13:37:51.100306    3465 log.go:172] (0xc0007e6370) (0xc0005720a0) Stream added, broadcasting: 3\nI0104 13:37:51.101860    3465 log.go:172] (0xc0007e6370) Reply frame received for 3\nI0104 13:37:51.101880    3465 log.go:172] (0xc0007e6370) (0xc0006d8000) Create stream\nI0104 13:37:51.101887    3465 log.go:172] (0xc0007e6370) (0xc0006d8000) Stream added, broadcasting: 5\nI0104 13:37:51.103420    3465 log.go:172] (0xc0007e6370) Reply frame received for 5\nI0104 13:37:51.309290    3465 log.go:172] (0xc0007e6370) Data frame received for 5\nI0104 13:37:51.309349    3465 log.go:172] (0xc0006d8000) (5) Data frame handling\nI0104 13:37:51.309383    3465 log.go:172] (0xc0006d8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 13:37:51.309462    3465 log.go:172] (0xc0007e6370) Data frame received for 3\nI0104 13:37:51.309482    3465 log.go:172] (0xc0005720a0) (3) Data frame handling\nI0104 13:37:51.309506    3465 log.go:172] (0xc0005720a0) (3) Data frame sent\nI0104 13:37:51.544355    3465 log.go:172] (0xc0007e6370) Data frame received for 1\nI0104 13:37:51.544589    3465 log.go:172] (0xc0007c0640) (1) Data frame handling\nI0104 13:37:51.544632    3465 log.go:172] (0xc0007c0640) (1) Data frame sent\nI0104 13:37:51.546306    3465 log.go:172] (0xc0007e6370) (0xc0007c0640) Stream removed, broadcasting: 1\nI0104 13:37:51.546472    3465 log.go:172] (0xc0007e6370) (0xc0005720a0) Stream removed, broadcasting: 3\nI0104 13:37:51.546746    3465 log.go:172] (0xc0007e6370) (0xc0006d8000) Stream removed, broadcasting: 5\nI0104 13:37:51.546972    3465 log.go:172] (0xc0007e6370) (0xc0007c0640) Stream removed, broadcasting: 1\nI0104 13:37:51.546990    3465 log.go:172] (0xc0007e6370) (0xc0005720a0) Stream removed, broadcasting: 3\nI0104 13:37:51.546999    3465 log.go:172] (0xc0007e6370) (0xc0006d8000) Stream removed, broadcasting: 5\n"
Jan  4 13:37:51.552: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 13:37:51.552: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 13:37:51.552: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  4 13:38:21.588: INFO: Deleting all statefulset in ns statefulset-4057
Jan  4 13:38:21.594: INFO: Scaling statefulset ss to 0
Jan  4 13:38:21.607: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 13:38:21.611: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:38:21.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4057" for this suite.
Jan  4 13:38:27.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:38:27.834: INFO: namespace statefulset-4057 deletion completed in 6.158288141s

• [SLOW TEST:125.184 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:38:27.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  4 13:38:38.612: INFO: Successfully updated pod "annotationupdate31d1cb9c-51a4-4355-bb03-1b853f724bfd"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:38:41.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7580" for this suite.
Jan  4 13:39:03.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:39:03.460: INFO: namespace projected-7580 deletion completed in 22.145115854s

• [SLOW TEST:35.627 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:39:03.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b0293fce-49b7-4bb7-b36b-dd1e37989b58
STEP: Creating a pod to test consume secrets
Jan  4 13:39:03.579: INFO: Waiting up to 5m0s for pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4" in namespace "secrets-8681" to be "success or failure"
Jan  4 13:39:03.586: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.712427ms
Jan  4 13:39:05.632: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052887239s
Jan  4 13:39:07.641: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061682922s
Jan  4 13:39:09.667: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087780384s
Jan  4 13:39:11.676: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097084526s
Jan  4 13:39:13.686: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106673216s
Jan  4 13:39:15.695: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.115376589s
STEP: Saw pod success
Jan  4 13:39:15.695: INFO: Pod "pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4" satisfied condition "success or failure"
Jan  4 13:39:15.699: INFO: Trying to get logs from node iruya-node pod pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:39:15.802: INFO: Waiting for pod pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4 to disappear
Jan  4 13:39:15.886: INFO: Pod pod-secrets-563946df-d760-42fd-97f1-57c2a7c01ca4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:39:15.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8681" for this suite.
Jan  4 13:39:21.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:39:22.058: INFO: namespace secrets-8681 deletion completed in 6.158985171s

• [SLOW TEST:18.598 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:39:22.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2ee922b7-69b8-48a2-8af6-54ea5beaf7a7
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2ee922b7-69b8-48a2-8af6-54ea5beaf7a7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:40:40.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7821" for this suite.
Jan  4 13:41:04.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:41:04.734: INFO: namespace projected-7821 deletion completed in 24.117917071s

• [SLOW TEST:102.675 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:41:04.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-fe899c4c-d8ca-4c91-805f-fefa1b132e7d
STEP: Creating a pod to test consume configMaps
Jan  4 13:41:04.912: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4" in namespace "projected-8131" to be "success or failure"
Jan  4 13:41:04.935: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.982063ms
Jan  4 13:41:06.945: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03237282s
Jan  4 13:41:08.962: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049833967s
Jan  4 13:41:10.985: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072928688s
Jan  4 13:41:13.001: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088977037s
Jan  4 13:41:15.011: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098891516s
Jan  4 13:41:17.021: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.109305363s
STEP: Saw pod success
Jan  4 13:41:17.021: INFO: Pod "pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4" satisfied condition "success or failure"
Jan  4 13:41:17.025: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 13:41:17.080: INFO: Waiting for pod pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4 to disappear
Jan  4 13:41:17.100: INFO: Pod pod-projected-configmaps-021c883b-f62f-4c34-a049-5e6a00ca40b4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:41:17.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8131" for this suite.
Jan  4 13:41:23.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:41:23.444: INFO: namespace projected-8131 deletion completed in 6.339575821s

• [SLOW TEST:18.710 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:41:23.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:41:23.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5163" for this suite.
Jan  4 13:41:45.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:41:46.084: INFO: namespace pods-5163 deletion completed in 22.351209311s

• [SLOW TEST:22.640 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:41:46.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-942d2341-4ceb-477f-a818-2db81fd5cbc3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:41:46.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7167" for this suite.
Jan  4 13:41:52.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:41:52.594: INFO: namespace secrets-7167 deletion completed in 6.218281877s

• [SLOW TEST:6.508 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:41:52.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3051/configmap-test-a52164fd-c5e3-4e61-a474-e62b9400b8df
STEP: Creating a pod to test consume configMaps
Jan  4 13:41:52.808: INFO: Waiting up to 5m0s for pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba" in namespace "configmap-3051" to be "success or failure"
Jan  4 13:41:52.819: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.806297ms
Jan  4 13:41:54.829: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02069319s
Jan  4 13:41:56.835: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026884343s
Jan  4 13:41:58.851: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042759377s
Jan  4 13:42:00.866: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057631385s
Jan  4 13:42:02.893: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.085154572s
Jan  4 13:42:04.909: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 12.100501508s
Jan  4 13:42:07.109: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.300408638s
STEP: Saw pod success
Jan  4 13:42:07.109: INFO: Pod "pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba" satisfied condition "success or failure"
Jan  4 13:42:07.113: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba container env-test: 
STEP: delete the pod
Jan  4 13:42:07.411: INFO: Waiting for pod pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba to disappear
Jan  4 13:42:07.599: INFO: Pod pod-configmaps-2816507e-67e5-4c81-b05d-5468157b35ba no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:42:07.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3051" for this suite.
Jan  4 13:42:13.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:42:13.761: INFO: namespace configmap-3051 deletion completed in 6.153562183s

• [SLOW TEST:21.167 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:42:13.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  4 13:42:13.936: INFO: Waiting up to 5m0s for pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9" in namespace "containers-7799" to be "success or failure"
Jan  4 13:42:13.950: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.980495ms
Jan  4 13:42:15.958: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021332741s
Jan  4 13:42:17.966: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029579094s
Jan  4 13:42:19.973: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036602113s
Jan  4 13:42:21.980: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044047295s
Jan  4 13:42:23.992: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055563221s
STEP: Saw pod success
Jan  4 13:42:23.992: INFO: Pod "client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9" satisfied condition "success or failure"
Jan  4 13:42:23.995: INFO: Trying to get logs from node iruya-node pod client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9 container test-container: 
STEP: delete the pod
Jan  4 13:42:24.117: INFO: Waiting for pod client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9 to disappear
Jan  4 13:42:24.289: INFO: Pod client-containers-7db57c64-c331-41df-ac87-4f2a21aab3f9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:42:24.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7799" for this suite.
Jan  4 13:42:30.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:42:30.516: INFO: namespace containers-7799 deletion completed in 6.218347911s

• [SLOW TEST:16.754 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:42:30.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:42:44.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1670" for this suite.
Jan  4 13:43:07.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:43:07.121: INFO: namespace replication-controller-1670 deletion completed in 22.134818255s

• [SLOW TEST:36.605 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:43:07.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:43:07.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7" in namespace "downward-api-9385" to be "success or failure"
Jan  4 13:43:07.221: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.234385ms
Jan  4 13:43:09.232: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051446824s
Jan  4 13:43:11.249: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067749831s
Jan  4 13:43:13.256: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074580846s
Jan  4 13:43:15.261: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07983313s
Jan  4 13:43:17.267: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.085848271s
Jan  4 13:43:19.279: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.09784933s
STEP: Saw pod success
Jan  4 13:43:19.279: INFO: Pod "downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7" satisfied condition "success or failure"
Jan  4 13:43:19.282: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7 container client-container: 
STEP: delete the pod
Jan  4 13:43:19.363: INFO: Waiting for pod downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7 to disappear
Jan  4 13:43:19.371: INFO: Pod downwardapi-volume-e2c2df60-b851-498f-8775-fda027cba1a7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:43:19.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9385" for this suite.
Jan  4 13:43:27.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:43:27.551: INFO: namespace downward-api-9385 deletion completed in 8.173253272s

• [SLOW TEST:20.430 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:43:27.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-ba818cde-a6e6-4281-96e1-babe6824bdb4
STEP: Creating secret with name s-test-opt-upd-33db5a14-1fb4-45b6-bb4d-eeea27cfe4c3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ba818cde-a6e6-4281-96e1-babe6824bdb4
STEP: Updating secret s-test-opt-upd-33db5a14-1fb4-45b6-bb4d-eeea27cfe4c3
STEP: Creating secret with name s-test-opt-create-c356c1e0-d442-4ee6-a4d6-f7d0461f3c93
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:45:08.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2976" for this suite.
Jan  4 13:45:32.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:45:32.573: INFO: namespace secrets-2976 deletion completed in 24.183535607s

• [SLOW TEST:125.022 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:45:32.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:45:48.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5482" for this suite.
Jan  4 13:45:54.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:45:55.073: INFO: namespace kubelet-test-5482 deletion completed in 6.227489297s

• [SLOW TEST:22.499 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:45:55.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:45:55.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148'
Jan  4 13:45:55.719: INFO: stderr: ""
Jan  4 13:45:55.719: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  4 13:45:55.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148'
Jan  4 13:45:56.331: INFO: stderr: ""
Jan  4 13:45:56.331: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  4 13:45:57.704: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:45:57.704: INFO: Found 0 / 1
Jan  4 13:45:58.346: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:45:58.346: INFO: Found 0 / 1
Jan  4 13:45:59.344: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:45:59.344: INFO: Found 0 / 1
Jan  4 13:46:00.345: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:00.346: INFO: Found 0 / 1
Jan  4 13:46:01.341: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:01.341: INFO: Found 0 / 1
Jan  4 13:46:02.369: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:02.369: INFO: Found 0 / 1
Jan  4 13:46:03.337: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:03.337: INFO: Found 0 / 1
Jan  4 13:46:04.364: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:04.364: INFO: Found 0 / 1
Jan  4 13:46:05.336: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:05.336: INFO: Found 0 / 1
Jan  4 13:46:06.395: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:06.395: INFO: Found 0 / 1
Jan  4 13:46:07.343: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:07.343: INFO: Found 0 / 1
Jan  4 13:46:08.337: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:08.337: INFO: Found 0 / 1
Jan  4 13:46:09.338: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:09.338: INFO: Found 1 / 1
Jan  4 13:46:09.338: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  4 13:46:09.340: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:46:09.340: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  4 13:46:09.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xtdtz --namespace=kubectl-1148'
Jan  4 13:46:09.439: INFO: stderr: ""
Jan  4 13:46:09.439: INFO: stdout: "Name:           redis-master-xtdtz\nNamespace:      kubectl-1148\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 04 Jan 2020 13:45:55 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://b2baf16dbff57550de1ebabe029e4a52625bb097706e85966f3c09621d07aa9f\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 04 Jan 2020 13:46:07 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fdgwm (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-fdgwm:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-fdgwm\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  14s   default-scheduler    Successfully assigned kubectl-1148/redis-master-xtdtz to iruya-node\n  Normal  Pulled     8s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Jan  4 13:46:09.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1148'
Jan  4 13:46:09.568: INFO: stderr: ""
Jan  4 13:46:09.568: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1148\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  14s   replication-controller  Created pod: redis-master-xtdtz\n"
Jan  4 13:46:09.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1148'
Jan  4 13:46:09.783: INFO: stderr: ""
Jan  4 13:46:09.784: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1148\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.255.187\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan  4 13:46:09.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan  4 13:46:10.033: INFO: stderr: ""
Jan  4 13:46:10.033: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 04 Jan 2020 13:45:33 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 04 Jan 2020 13:45:33 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 04 Jan 2020 13:45:33 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 04 Jan 2020 13:45:33 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         153d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         84d\n  kubectl-1148               redis-master-xtdtz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan  4 13:46:10.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1148'
Jan  4 13:46:10.182: INFO: stderr: ""
Jan  4 13:46:10.182: INFO: stdout: "Name:         kubectl-1148\nLabels:       e2e-framework=kubectl\n              e2e-run=82b82fc6-6f90-4890-a172-e46bba02a8db\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:46:10.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1148" for this suite.
Jan  4 13:46:50.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:46:50.392: INFO: namespace kubectl-1148 deletion completed in 40.202942948s

• [SLOW TEST:55.319 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:46:50.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:46:50.431: INFO: Creating deployment "test-recreate-deployment"
Jan  4 13:46:50.435: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  4 13:46:50.564: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  4 13:46:52.578: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  4 13:46:52.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:46:54.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:46:56.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:46:58.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:47:00.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:47:02.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742410, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:47:04.604: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  4 13:47:04.627: INFO: Updating deployment test-recreate-deployment
Jan  4 13:47:04.627: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  4 13:47:06.305: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-34,SelfLink:/apis/apps/v1/namespaces/deployment-34/deployments/test-recreate-deployment,UID:65b141c3-7d94-409c-a3d1-2bb9f277cdf3,ResourceVersion:19271664,Generation:2,CreationTimestamp:2020-01-04 13:46:50 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-04 13:47:06 +0000 UTC 2020-01-04 13:47:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-04 13:47:06 +0000 UTC 2020-01-04 13:46:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  4 13:47:06.425: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-34,SelfLink:/apis/apps/v1/namespaces/deployment-34/replicasets/test-recreate-deployment-5c8c9cc69d,UID:cb9fd50e-3514-4b9e-ae0e-de7e21722d9d,ResourceVersion:19271662,Generation:1,CreationTimestamp:2020-01-04 13:47:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 65b141c3-7d94-409c-a3d1-2bb9f277cdf3 0xc001592c57 0xc001592c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 13:47:06.425: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  4 13:47:06.425: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-34,SelfLink:/apis/apps/v1/namespaces/deployment-34/replicasets/test-recreate-deployment-6df85df6b9,UID:8188b5c0-ee0b-45db-918b-7956555dfadb,ResourceVersion:19271653,Generation:2,CreationTimestamp:2020-01-04 13:46:50 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 65b141c3-7d94-409c-a3d1-2bb9f277cdf3 0xc001592d97 0xc001592d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 13:47:06.433: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5g5sg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5g5sg,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-34,SelfLink:/api/v1/namespaces/deployment-34/pods/test-recreate-deployment-5c8c9cc69d-5g5sg,UID:a707a8db-24e5-437d-8ffa-64b510c3c53a,ResourceVersion:19271659,Generation:0,CreationTimestamp:2020-01-04 13:47:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d cb9fd50e-3514-4b9e-ae0e-de7e21722d9d 0xc00061dea7 0xc00061dea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-84vwk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-84vwk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-84vwk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00061df20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00061df40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:47:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:47:06.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-34" for this suite.
Jan  4 13:47:14.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:47:14.672: INFO: namespace deployment-34 deletion completed in 8.235175591s

• [SLOW TEST:24.280 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
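The Recreate rollout verified above (the old ReplicaSet is scaled to 0 before the new pod appears) is driven by the Deployment's `strategy` field. A minimal sketch of such a Deployment, reusing the labels and image visible in the log; the name is illustrative, the e2e framework generates its own manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment   # name as seen in the log
spec:
  replicas: 1
  strategy:
    type: Recreate                 # delete all old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3           # label from the logged ReplicaSet dump
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # image of the new pod in the log
```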
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:47:14.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a4340440-b9cb-4f91-b0a3-8c7d7baed4ac
STEP: Creating a pod to test consume configMaps
Jan  4 13:47:14.945: INFO: Waiting up to 5m0s for pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96" in namespace "configmap-6051" to be "success or failure"
Jan  4 13:47:14.985: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 40.013879ms
Jan  4 13:47:17.005: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05956952s
Jan  4 13:47:19.011: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065830653s
Jan  4 13:47:21.018: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072862182s
Jan  4 13:47:23.031: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086093323s
Jan  4 13:47:25.062: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11639443s
Jan  4 13:47:27.069: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 12.123421094s
Jan  4 13:47:29.083: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Pending", Reason="", readiness=false. Elapsed: 14.137594439s
Jan  4 13:47:31.091: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.145964371s
STEP: Saw pod success
Jan  4 13:47:31.091: INFO: Pod "pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96" satisfied condition "success or failure"
Jan  4 13:47:31.094: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96 container configmap-volume-test: 
STEP: delete the pod
Jan  4 13:47:31.127: INFO: Waiting for pod pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96 to disappear
Jan  4 13:47:31.175: INFO: Pod pod-configmaps-4cebfe5d-d828-4ed0-9dbe-55a5240b6b96 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:47:31.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6051" for this suite.
Jan  4 13:47:37.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:47:37.631: INFO: namespace configmap-6051 deletion completed in 6.434866397s

• [SLOW TEST:22.958 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
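The "mappings and Item mode" case above maps a ConfigMap key to a custom file path with an explicit per-item mode. A hedged sketch of the pod-spec shape involved (key, path, mountPath, image, and ConfigMap name are illustrative; only the container name comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name from the log
    image: busybox                   # illustrative test image
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap             # illustrative ConfigMap name
      items:
      - key: data-1                  # illustrative key
        path: path/to/data-2         # remapped file path ("mappings")
        mode: 0400                   # per-item file mode ("Item mode set")
```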
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:47:37.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:47:37.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c" in namespace "projected-9102" to be "success or failure"
Jan  4 13:47:37.809: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.677228ms
Jan  4 13:47:39.823: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032803075s
Jan  4 13:47:41.829: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038979901s
Jan  4 13:47:43.837: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047095501s
Jan  4 13:47:45.842: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052157816s
Jan  4 13:47:47.868: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077919206s
Jan  4 13:47:49.968: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.177697931s
STEP: Saw pod success
Jan  4 13:47:49.968: INFO: Pod "downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c" satisfied condition "success or failure"
Jan  4 13:47:49.974: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c container client-container: 
STEP: delete the pod
Jan  4 13:47:50.030: INFO: Waiting for pod downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c to disappear
Jan  4 13:47:50.126: INFO: Pod downwardapi-volume-075148c6-c038-4fa3-bec1-31fbaa92f41c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:47:50.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9102" for this suite.
Jan  4 13:47:56.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:47:56.270: INFO: namespace projected-9102 deletion completed in 6.137654609s

• [SLOW TEST:18.638 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
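The projected downward API case above surfaces the container's CPU limit as a file in a projected volume. A sketch of the mechanism under assumed names (image, paths, and the 500m limit are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log
    image: busybox                   # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # the limit the volume file reports
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # exposed via the downward API
```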
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:47:56.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  4 13:47:56.361: INFO: Waiting up to 5m0s for pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86" in namespace "var-expansion-269" to be "success or failure"
Jan  4 13:47:56.421: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 58.856098ms
Jan  4 13:47:58.429: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067275321s
Jan  4 13:48:00.438: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075961776s
Jan  4 13:48:02.444: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082559447s
Jan  4 13:48:04.524: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162373524s
Jan  4 13:48:06.539: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17750049s
Jan  4 13:48:08.551: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Pending", Reason="", readiness=false. Elapsed: 12.189311684s
Jan  4 13:48:10.579: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.217640057s
STEP: Saw pod success
Jan  4 13:48:10.579: INFO: Pod "var-expansion-ea285f33-724b-4224-926e-aa536ec9de86" satisfied condition "success or failure"
Jan  4 13:48:10.592: INFO: Trying to get logs from node iruya-node pod var-expansion-ea285f33-724b-4224-926e-aa536ec9de86 container dapi-container: 
STEP: delete the pod
Jan  4 13:48:11.209: INFO: Waiting for pod var-expansion-ea285f33-724b-4224-926e-aa536ec9de86 to disappear
Jan  4 13:48:11.222: INFO: Pod var-expansion-ea285f33-724b-4224-926e-aa536ec9de86 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:48:11.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-269" for this suite.
Jan  4 13:48:17.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:48:17.539: INFO: namespace var-expansion-269 deletion completed in 6.303913882s

• [SLOW TEST:21.270 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
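Env var composition, as exercised above, means one variable's `value` references earlier variables with `$(NAME)` syntax, expanded by the kubelet at container start. A hedged sketch (variable names and values are illustrative; the container name is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container        # container name from the log
    image: busybox              # illustrative
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # composed from the variables defined above
```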
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:48:17.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3e10ba65-2d72-41dd-a833-311504dd2ca7
STEP: Creating a pod to test consume secrets
Jan  4 13:48:17.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5" in namespace "projected-7223" to be "success or failure"
Jan  4 13:48:17.692: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.741746ms
Jan  4 13:48:19.702: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022042396s
Jan  4 13:48:21.708: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028135318s
Jan  4 13:48:23.734: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053886909s
Jan  4 13:48:25.742: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062630278s
Jan  4 13:48:27.773: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093293278s
STEP: Saw pod success
Jan  4 13:48:27.773: INFO: Pod "pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5" satisfied condition "success or failure"
Jan  4 13:48:27.786: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:48:28.054: INFO: Waiting for pod pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5 to disappear
Jan  4 13:48:28.074: INFO: Pod pod-projected-secrets-279e41a2-e575-4846-9566-63f7059600f5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:48:28.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7223" for this suite.
Jan  4 13:48:34.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:48:34.214: INFO: namespace projected-7223 deletion completed in 6.132500028s

• [SLOW TEST:16.675 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
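The projected-secret case above sets `defaultMode` on the projected volume so every rendered secret file gets that permission unless overridden per item. A sketch under assumed names (secret name, image, and mountPath are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test  # container name from the log
    image: busybox                      # illustrative
    command: ["sh", "-c", "ls -l /etc/projected-secret"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400                 # the mode under test
      sources:
      - secret:
          name: my-secret               # illustrative secret name
```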
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:48:34.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4783.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4783.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 13:48:50.504: INFO: File wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local from pod  dns-4783/dns-test-822f8c09-394f-4ac6-b9db-a897ce9c9ffa contains '' instead of 'foo.example.com.'
Jan  4 13:48:50.519: INFO: File jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local from pod  dns-4783/dns-test-822f8c09-394f-4ac6-b9db-a897ce9c9ffa contains '' instead of 'foo.example.com.'
Jan  4 13:48:50.519: INFO: Lookups using dns-4783/dns-test-822f8c09-394f-4ac6-b9db-a897ce9c9ffa failed for: [wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local]

Jan  4 13:48:55.542: INFO: DNS probes using dns-test-822f8c09-394f-4ac6-b9db-a897ce9c9ffa succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4783.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4783.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 13:49:19.896: INFO: File wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local from pod  dns-4783/dns-test-e707d800-c1b1-4ea8-96fa-c5a84641c6e2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  4 13:49:19.904: INFO: File jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local from pod  dns-4783/dns-test-e707d800-c1b1-4ea8-96fa-c5a84641c6e2 contains '' instead of 'bar.example.com.'
Jan  4 13:49:19.904: INFO: Lookups using dns-4783/dns-test-e707d800-c1b1-4ea8-96fa-c5a84641c6e2 failed for: [wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local]

Jan  4 13:49:25.034: INFO: DNS probes using dns-test-e707d800-c1b1-4ea8-96fa-c5a84641c6e2 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4783.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4783.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 13:49:45.480: INFO: File wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local from pod  dns-4783/dns-test-3bb74571-4f2a-482d-aad8-d597f86487b2 contains '' instead of '10.100.249.86'
Jan  4 13:49:45.504: INFO: File jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local from pod  dns-4783/dns-test-3bb74571-4f2a-482d-aad8-d597f86487b2 contains '' instead of '10.100.249.86'
Jan  4 13:49:45.505: INFO: Lookups using dns-4783/dns-test-3bb74571-4f2a-482d-aad8-d597f86487b2 failed for: [wheezy_udp@dns-test-service-3.dns-4783.svc.cluster.local jessie_udp@dns-test-service-3.dns-4783.svc.cluster.local]

Jan  4 13:49:50.561: INFO: DNS probes using dns-test-3bb74571-4f2a-482d-aad8-d597f86487b2 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:49:50.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4783" for this suite.
Jan  4 13:50:00.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:50:00.925: INFO: namespace dns-4783 deletion completed in 10.249949861s

• [SLOW TEST:86.711 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
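The three probe rounds above track a Service that starts as an ExternalName alias (a CNAME to foo.example.com), is retargeted to bar.example.com, and is finally converted to type=ClusterIP (an A record, 10.100.249.86 in this run). The initial manifest is, in sketch form, using the service name and namespace from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3        # service name from the log's DNS queries
  namespace: dns-4783             # namespace from the log
spec:
  type: ExternalName
  externalName: foo.example.com   # later changed to bar.example.com, then type=ClusterIP
```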
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:50:00.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  4 13:50:13.116: INFO: 10 pods remaining
Jan  4 13:50:13.116: INFO: 10 pods has nil DeletionTimestamp
Jan  4 13:50:13.117: INFO: 
Jan  4 13:50:14.493: INFO: 8 pods remaining
Jan  4 13:50:14.493: INFO: 0 pods has nil DeletionTimestamp
Jan  4 13:50:14.493: INFO: 
STEP: Gathering metrics
W0104 13:50:15.598745       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 13:50:15.598: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:50:15.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-208" for this suite.
Jan  4 13:50:29.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:50:30.031: INFO: namespace gc-208 deletion completed in 14.427823889s

• [SLOW TEST:29.105 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
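Keeping the RC visible while its pods drain, as verified above, is foreground cascading deletion: the delete request carries a `propagationPolicy` of `Foreground`, which parks a `foregroundDeletion` finalizer on the RC until the garbage collector has removed all dependents. The request body amounts to (a sketch of the API shape, not the exact bytes the framework sends):

```yaml
# DeleteOptions sent with the DELETE request for the ReplicationController
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # RC persists until all its pods are deleted
```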
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:50:30.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 13:50:30.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3848'
Jan  4 13:50:32.646: INFO: stderr: ""
Jan  4 13:50:32.646: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  4 13:50:47.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3848 -o json'
Jan  4 13:50:47.838: INFO: stderr: ""
Jan  4 13:50:47.838: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-04T13:50:32Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-3848\",\n        \"resourceVersion\": \"19272293\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3848/pods/e2e-test-nginx-pod\",\n        \"uid\": \"965b6670-e116-4192-a9b1-7ebcbaf08c7a\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-4w9w5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-4w9w5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-4w9w5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T13:50:32Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T13:50:44Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T13:50:44Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T13:50:32Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://12751a5b76d4435fec7c6a9a8e3818633be03cf2dfe10f60874b16c8e6ce16a1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-01-04T13:50:43Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-04T13:50:32Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  4 13:50:47.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3848'
Jan  4 13:50:48.193: INFO: stderr: ""
Jan  4 13:50:48.194: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  4 13:50:48.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3848'
Jan  4 13:51:01.896: INFO: stderr: ""
Jan  4 13:51:01.897: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:51:01.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3848" for this suite.
Jan  4 13:51:10.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:51:10.106: INFO: namespace kubectl-3848 deletion completed in 8.198503112s

• [SLOW TEST:40.075 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
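The replace step above pipes a modified manifest into `kubectl replace -f -`; the log shows the live pod JSON but not the manifest the framework piped in. A minimal sketch of such a replacement (metadata taken from the pod JSON above, the target image from the verification step; this is illustrative, not the framework's exact input):

```yaml
# Piped to: kubectl replace -f - --namespace=kubectl-3848
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-3848
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
    - name: e2e-test-nginx-pod
      image: docker.io/library/busybox:1.29   # swapped from nginx:1.14-alpine
```

Note that `kubectl replace` needs a complete object, so in practice one fetches the live pod (as dumped above) and rewrites only `.spec.containers[0].image` before piping it back.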
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:51:10.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan  4 13:51:10.170: INFO: Waiting up to 5m0s for pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d" in namespace "var-expansion-5934" to be "success or failure"
Jan  4 13:51:10.173: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765123ms
Jan  4 13:51:12.187: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016973035s
Jan  4 13:51:14.203: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033452999s
Jan  4 13:51:16.210: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04009115s
Jan  4 13:51:18.219: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048657012s
Jan  4 13:51:20.286: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Running", Reason="", readiness=true. Elapsed: 10.116489538s
Jan  4 13:51:22.296: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.125708107s
STEP: Saw pod success
Jan  4 13:51:22.296: INFO: Pod "var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d" satisfied condition "success or failure"
Jan  4 13:51:22.299: INFO: Trying to get logs from node iruya-node pod var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d container dapi-container: 
STEP: delete the pod
Jan  4 13:51:22.445: INFO: Waiting for pod var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d to disappear
Jan  4 13:51:22.462: INFO: Pod var-expansion-4372d0ae-6c5f-4582-8910-2fb9b2a2833d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:51:22.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5934" for this suite.
Jan  4 13:51:28.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:51:28.732: INFO: namespace var-expansion-5934 deletion completed in 6.209761456s

• [SLOW TEST:18.626 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
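The pod under test exercises `$(VAR)` substitution in a container's `command`; its spec is not printed in the log. A hedged sketch of an equivalent pod (names and values illustrative, the container name `dapi-container` taken from the log above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-test        # real test pods carry a generated UID suffix
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      env:
        - name: MESSAGE
          value: "hello from the environment"
      # $(MESSAGE) is expanded by the kubelet before the container starts
      command: ["sh", "-c", "echo $(MESSAGE)"]
```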
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:51:28.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  4 13:51:28.966: INFO: Waiting up to 5m0s for pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154" in namespace "containers-4103" to be "success or failure"
Jan  4 13:51:29.071: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Pending", Reason="", readiness=false. Elapsed: 104.94417ms
Jan  4 13:51:31.953: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986598728s
Jan  4 13:51:33.961: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Pending", Reason="", readiness=false. Elapsed: 4.994165251s
Jan  4 13:51:36.026: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Pending", Reason="", readiness=false. Elapsed: 7.059893336s
Jan  4 13:51:38.039: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Pending", Reason="", readiness=false. Elapsed: 9.072955513s
Jan  4 13:51:40.045: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Pending", Reason="", readiness=false. Elapsed: 11.079036139s
Jan  4 13:51:42.061: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.09452327s
STEP: Saw pod success
Jan  4 13:51:42.061: INFO: Pod "client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154" satisfied condition "success or failure"
Jan  4 13:51:42.078: INFO: Trying to get logs from node iruya-node pod client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154 container test-container: 
STEP: delete the pod
Jan  4 13:51:42.172: INFO: Waiting for pod client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154 to disappear
Jan  4 13:51:42.249: INFO: Pod client-containers-e9b1e3cb-0b31-47b0-8d5c-ac628d71a154 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:51:42.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4103" for this suite.
Jan  4 13:51:48.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:51:48.451: INFO: namespace containers-4103 deletion completed in 6.187762709s

• [SLOW TEST:19.719 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
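"Override arguments" here means setting `args` in the pod spec, which replaces the image's Docker `CMD` while leaving its `ENTRYPOINT` intact. A sketch with illustrative values (the container name `test-container` comes from the log above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-args    # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      # args replaces the image's CMD; the image's ENTRYPOINT still runs
      args: ["echo", "override", "arguments"]
```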
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:51:48.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 13:51:48.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9064'
Jan  4 13:51:48.889: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 13:51:48.890: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan  4 13:51:50.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9064'
Jan  4 13:51:51.218: INFO: stderr: ""
Jan  4 13:51:51.218: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:51:51.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9064" for this suite.
Jan  4 13:52:13.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:52:13.400: INFO: namespace kubectl-9064 deletion completed in 22.173885154s

• [SLOW TEST:24.949 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
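The stderr above notes that `kubectl run --generator=deployment/apps.v1` is deprecated. Roughly what that generator creates for the command in this test, expressed as an explicit manifest (label and naming conventions are the generator's; treat this as illustrative):

```yaml
# Equivalent to: kubectl create deployment e2e-test-nginx-deployment \
#                  --image=docker.io/library/nginx:1.14-alpine
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
        - name: e2e-test-nginx-deployment
          image: docker.io/library/nginx:1.14-alpine
```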
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:52:13.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2897ccce-0398-4775-beb7-a4627113927f
STEP: Creating a pod to test consume configMaps
Jan  4 13:52:13.686: INFO: Waiting up to 5m0s for pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653" in namespace "configmap-83" to be "success or failure"
Jan  4 13:52:13.696: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Pending", Reason="", readiness=false. Elapsed: 9.434274ms
Jan  4 13:52:15.703: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016364236s
Jan  4 13:52:17.710: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023220807s
Jan  4 13:52:19.719: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032676992s
Jan  4 13:52:21.730: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043326353s
Jan  4 13:52:23.740: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053874756s
Jan  4 13:52:25.771: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.084559347s
STEP: Saw pod success
Jan  4 13:52:25.771: INFO: Pod "pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653" satisfied condition "success or failure"
Jan  4 13:52:25.783: INFO: Trying to get logs from node iruya-node pod pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653 container configmap-volume-test: 
STEP: delete the pod
Jan  4 13:52:26.007: INFO: Waiting for pod pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653 to disappear
Jan  4 13:52:26.015: INFO: Pod pod-configmaps-90659b87-f9c3-425b-b742-a4cc0e97b653 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:52:26.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-83" for this suite.
Jan  4 13:52:32.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:52:32.221: INFO: namespace configmap-83 deletion completed in 6.198593961s

• [SLOW TEST:18.821 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
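The ConfigMap-as-volume pattern this test exercises can be sketched as follows (data keys and image are illustrative; the container name `configmap-volume-test` comes from the log above, and real test objects carry generated UID suffixes):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
    - name: configmap-volume-test
      image: busybox
      # each ConfigMap key becomes a file under the mount path
      command: ["cat", "/etc/configmap-volume/data-1"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume
```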
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:52:32.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 13:52:32.460: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:52:55.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5295" for this suite.
Jan  4 13:53:19.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:53:19.894: INFO: namespace init-container-5295 deletion completed in 24.174394376s

• [SLOW TEST:47.673 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
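A sketch of the shape of pod this test creates, a `RestartAlways` pod with init containers (images and names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
spec:
  restartPolicy: Always
  # init containers run to completion, one at a time and in order,
  # before any regular container starts
  initContainers:
    - name: init-1
      image: busybox
      command: ["true"]
    - name: init-2
      image: busybox
      command: ["true"]
  containers:
    - name: run-1
      image: k8s.gcr.io/pause   # long-running, so the pod stays Running
```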
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:53:19.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:53:20.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6" in namespace "projected-2842" to be "success or failure"
Jan  4 13:53:20.081: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.62702ms
Jan  4 13:53:22.663: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59396276s
Jan  4 13:53:24.671: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.60178199s
Jan  4 13:53:26.683: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613995136s
Jan  4 13:53:28.700: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630483017s
Jan  4 13:53:30.708: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Running", Reason="", readiness=true. Elapsed: 10.638858037s
Jan  4 13:53:32.716: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.646246627s
STEP: Saw pod success
Jan  4 13:53:32.716: INFO: Pod "downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6" satisfied condition "success or failure"
Jan  4 13:53:32.720: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6 container client-container: 
STEP: delete the pod
Jan  4 13:53:33.407: INFO: Waiting for pod downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6 to disappear
Jan  4 13:53:33.438: INFO: Pod downwardapi-volume-705e7397-378c-451b-81d6-85fc372386b6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:53:33.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2842" for this suite.
Jan  4 13:53:39.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:53:39.842: INFO: namespace projected-2842 deletion completed in 6.395608964s

• [SLOW TEST:19.947 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
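Exposing a container's memory limit through a projected downwardAPI volume, as this test does, can be sketched like this (limit value, paths, and image are illustrative; the container name `client-container` comes from the log above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: 64Mi
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: memory_limit
                  # resourceFieldRef surfaces the container's resource limit
                  resourceFieldRef:
                    containerName: client-container
                    resource: limits.memory
```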
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:53:39.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:53:40.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed" in namespace "projected-1944" to be "success or failure"
Jan  4 13:53:40.020: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed": Phase="Pending", Reason="", readiness=false. Elapsed: 11.354355ms
Jan  4 13:53:42.069: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060896084s
Jan  4 13:53:44.084: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075380992s
Jan  4 13:53:46.089: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080680064s
Jan  4 13:53:48.097: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087982611s
Jan  4 13:53:50.110: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101667398s
STEP: Saw pod success
Jan  4 13:53:50.110: INFO: Pod "downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed" satisfied condition "success or failure"
Jan  4 13:53:50.116: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed container client-container: 
STEP: delete the pod
Jan  4 13:53:50.209: INFO: Waiting for pod downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed to disappear
Jan  4 13:53:50.215: INFO: Pod downwardapi-volume-ad89ec57-d1c7-416c-ba75-a2dc407effed no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:53:50.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1944" for this suite.
Jan  4 13:53:56.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:53:56.365: INFO: namespace projected-1944 deletion completed in 6.143292111s

• [SLOW TEST:16.523 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:53:56.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 13:53:56.581: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:53:57.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2551" for this suite.
Jan  4 13:54:03.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:54:03.915: INFO: namespace custom-resource-definition-2551 deletion completed in 6.156187143s

• [SLOW TEST:7.550 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
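A minimal CRD of the kind this test creates and deletes might look like the following (group, names, and version are illustrative; `apiextensions.k8s.io/v1beta1` matches the v1.15 server in this run):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
```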
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 13:54:03.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  4 13:54:04.029: INFO: Waiting up to 5m0s for pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2" in namespace "containers-8374" to be "success or failure"
Jan  4 13:54:04.036: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388631ms
Jan  4 13:54:06.042: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012846532s
Jan  4 13:54:08.051: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021951008s
Jan  4 13:54:10.058: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028278305s
Jan  4 13:54:12.065: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035358337s
Jan  4 13:54:14.072: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042267548s
STEP: Saw pod success
Jan  4 13:54:14.072: INFO: Pod "client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2" satisfied condition "success or failure"
Jan  4 13:54:14.075: INFO: Trying to get logs from node iruya-node pod client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2 container test-container: 
STEP: delete the pod
Jan  4 13:54:14.320: INFO: Waiting for pod client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2 to disappear
Jan  4 13:54:14.327: INFO: Pod client-containers-2a2b1b8c-3f32-4cb1-ae6d-c84970f847a2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 13:54:14.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8374" for this suite.
Jan  4 13:54:20.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:54:20.635: INFO: namespace containers-8374 deletion completed in 6.2992694s

• [SLOW TEST:16.719 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
Jan  4 13:54:20.636: INFO: Running AfterSuite actions on all nodes
Jan  4 13:54:20.636: INFO: Running AfterSuite actions on node 1
Jan  4 13:54:20.636: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8672.801 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS