I0513 21:10:37.744198 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0513 21:10:37.744538 6 e2e.go:109] Starting e2e run "d033ebe1-a1df-4403-9844-2873134c9854" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589404236 - Will randomize all specs
Will run 278 of 4842 specs

May 13 21:10:37.798: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:10:37.803: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 13 21:10:37.826: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 13 21:10:37.853: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 13 21:10:37.853: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 13 21:10:37.853: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 13 21:10:37.860: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 13 21:10:37.860: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 13 21:10:37.860: INFO: e2e test version: v1.17.4
May 13 21:10:37.861: INFO: kube-apiserver version: v1.17.2
May 13 21:10:37.861: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:10:37.866: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 13 21:10:37.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
May 13 21:10:37.944: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-dd7bdcfb-a718-4cf2-8c6c-9fd07df21935
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 21:10:44.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2831" for this suite.
• [SLOW TEST:6.149 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 13 21:10:44.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 13 21:10:44.077: INFO: Waiting up to 5m0s for pod "pod-38934536-2c4f-4958-817f-093cfa5463ad" in namespace "emptydir-6367" to be "success or failure"
May 13 21:10:44.082: INFO: Pod "pod-38934536-2c4f-4958-817f-093cfa5463ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678979ms
May 13 21:10:46.347: INFO: Pod "pod-38934536-2c4f-4958-817f-093cfa5463ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269897258s
May 13 21:10:48.351: INFO: Pod "pod-38934536-2c4f-4958-817f-093cfa5463ad": Phase="Running", Reason="", readiness=true. Elapsed: 4.273675539s
May 13 21:10:50.355: INFO: Pod "pod-38934536-2c4f-4958-817f-093cfa5463ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.277365719s
STEP: Saw pod success
May 13 21:10:50.355: INFO: Pod "pod-38934536-2c4f-4958-817f-093cfa5463ad" satisfied condition "success or failure"
May 13 21:10:50.357: INFO: Trying to get logs from node jerma-worker pod pod-38934536-2c4f-4958-817f-093cfa5463ad container test-container: 
STEP: delete the pod
May 13 21:10:50.433: INFO: Waiting for pod pod-38934536-2c4f-4958-817f-093cfa5463ad to disappear
May 13 21:10:50.443: INFO: Pod pod-38934536-2c4f-4958-817f-093cfa5463ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 21:10:50.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6367" for this suite.
• [SLOW TEST:6.433 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":68,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 13 21:10:50.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3488
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-3488
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3488
May 13 21:10:50.640: INFO: Found 0 stateful pods, waiting for 1
May 13 21:11:00.644: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 13 21:11:00.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 13 21:11:03.513: INFO: stderr: "I0513 21:11:03.382103 27 log.go:172] (0xc0008f0c60) (0xc00070dc20) Create stream\nI0513 21:11:03.382158 27 log.go:172] (0xc0008f0c60) (0xc00070dc20) Stream added, broadcasting: 1\nI0513 21:11:03.384673 27 log.go:172] (0xc0008f0c60) Reply frame received for 1\nI0513 21:11:03.384715 27 log.go:172] (0xc0008f0c60) (0xc00070dcc0) Create stream\nI0513 21:11:03.384726 27 log.go:172] (0xc0008f0c60) (0xc00070dcc0) Stream added, broadcasting: 3\nI0513 21:11:03.385711 27 log.go:172] (0xc0008f0c60) Reply frame received for 3\nI0513 21:11:03.385741 27 log.go:172] (0xc0008f0c60) (0xc00070dd60) Create stream\nI0513 21:11:03.385749 27 log.go:172] (0xc0008f0c60) (0xc00070dd60) Stream added, broadcasting: 5\nI0513 21:11:03.386603 27 log.go:172] (0xc0008f0c60) Reply frame received for 5\nI0513 21:11:03.478377 27 log.go:172] (0xc0008f0c60) Data frame received for 5\nI0513 21:11:03.478411 27 log.go:172] (0xc00070dd60) (5) Data frame handling\nI0513 21:11:03.478433 27 log.go:172] (0xc00070dd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:11:03.504237 27 log.go:172] (0xc0008f0c60) Data frame received for 3\nI0513 21:11:03.504273 27 log.go:172] (0xc00070dcc0) (3) Data frame handling\nI0513 21:11:03.504303 27 log.go:172] (0xc00070dcc0) (3) Data frame sent\nI0513 21:11:03.504314 27 log.go:172] (0xc0008f0c60) Data frame received for 3\nI0513 21:11:03.504321 27 log.go:172] (0xc00070dcc0) (3) Data frame handling\nI0513 21:11:03.504435 27 log.go:172] (0xc0008f0c60) Data frame received for 5\nI0513 21:11:03.504456 27 log.go:172] (0xc00070dd60) (5) Data frame handling\nI0513 21:11:03.507352 27 log.go:172] (0xc0008f0c60) Data frame received for 1\nI0513 21:11:03.507371 27 log.go:172] (0xc00070dc20) (1) Data frame handling\nI0513 21:11:03.507380 27 log.go:172] (0xc00070dc20) (1) Data frame sent\nI0513 21:11:03.507390 27 log.go:172] (0xc0008f0c60) (0xc00070dc20) Stream removed, broadcasting: 1\nI0513 21:11:03.507528 27 log.go:172] (0xc0008f0c60) Go away received\nI0513 21:11:03.507654 27 log.go:172] (0xc0008f0c60) (0xc00070dc20) Stream removed, broadcasting: 1\nI0513 21:11:03.507671 27 log.go:172] (0xc0008f0c60) (0xc00070dcc0) Stream removed, broadcasting: 3\nI0513 21:11:03.507677 27 log.go:172] (0xc0008f0c60) (0xc00070dd60) Stream removed, broadcasting: 5\n"
May 13 21:11:03.513: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 13 21:11:03.513: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 13 21:11:03.517: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 13 21:11:13.521: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 13 21:11:13.521: INFO: Waiting for statefulset status.replicas updated to 0
May 13 21:11:13.534: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:13.534: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:13.534: INFO: 
May 13 21:11:13.534: INFO: StatefulSet ss has not reached scale 3, at 1
May 13 21:11:14.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996094437s
May 13 21:11:15.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.942340935s
May 13 21:11:16.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.889162255s
May 13 21:11:17.767: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.86362132s
May 13 21:11:18.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.763082296s
May 13 21:11:19.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.758519959s
May 13 21:11:20.781: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.753828786s
May 13 21:11:21.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.748390269s
May 13 21:11:22.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 743.802611ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3488
May 13 21:11:23.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:11:24.051: INFO: stderr: "I0513 21:11:23.956917 59 log.go:172] (0xc0004dc2c0) (0xc000675d60) Create stream\nI0513 21:11:23.956962 59 log.go:172] (0xc0004dc2c0) (0xc000675d60) Stream added, broadcasting: 1\nI0513 21:11:23.959457 59 log.go:172] (0xc0004dc2c0) Reply frame received for 1\nI0513 21:11:23.959488 59 log.go:172] (0xc0004dc2c0) (0xc0005bc6e0) Create stream\nI0513 21:11:23.959499 59 log.go:172] (0xc0004dc2c0) (0xc0005bc6e0) Stream added, broadcasting: 3\nI0513 21:11:23.960128 59 log.go:172] (0xc0004dc2c0) Reply frame received for 3\nI0513 21:11:23.960147 59 log.go:172] (0xc0004dc2c0) (0xc000675e00) Create stream\nI0513 21:11:23.960153 59 log.go:172] (0xc0004dc2c0) (0xc000675e00) Stream added, broadcasting: 5\nI0513 21:11:23.960847 59 log.go:172] (0xc0004dc2c0) Reply frame received for 5\nI0513 21:11:24.025616 59 log.go:172] (0xc0004dc2c0) Data frame received for 5\nI0513 21:11:24.025637 59 log.go:172] (0xc000675e00) (5) Data frame handling\nI0513 21:11:24.025654 59 log.go:172] (0xc000675e00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0513 21:11:24.044565 59 log.go:172] (0xc0004dc2c0) Data frame received for 3\nI0513 21:11:24.044597 59 log.go:172] (0xc0005bc6e0) (3) Data frame handling\nI0513 21:11:24.044610 59 log.go:172] (0xc0005bc6e0) (3) Data frame sent\nI0513 21:11:24.044619 59 log.go:172] (0xc0004dc2c0) Data frame received for 3\nI0513 21:11:24.044647 59 log.go:172] (0xc0004dc2c0) Data frame received for 5\nI0513 21:11:24.044705 59 log.go:172] (0xc000675e00) (5) Data frame handling\nI0513 21:11:24.044745 59 log.go:172] (0xc0005bc6e0) (3) Data frame handling\nI0513 21:11:24.046248 59 log.go:172] (0xc0004dc2c0) Data frame received for 1\nI0513 21:11:24.046276 59 log.go:172] (0xc000675d60) (1) Data frame handling\nI0513 21:11:24.046290 59 log.go:172] (0xc000675d60) (1) Data frame sent\nI0513 21:11:24.046301 59 log.go:172] (0xc0004dc2c0) (0xc000675d60) Stream removed, broadcasting: 1\nI0513 21:11:24.046335 59 log.go:172] (0xc0004dc2c0) Go away received\nI0513 21:11:24.046596 59 log.go:172] (0xc0004dc2c0) (0xc000675d60) Stream removed, broadcasting: 1\nI0513 21:11:24.046614 59 log.go:172] (0xc0004dc2c0) (0xc0005bc6e0) Stream removed, broadcasting: 3\nI0513 21:11:24.046633 59 log.go:172] (0xc0004dc2c0) (0xc000675e00) Stream removed, broadcasting: 5\n"
May 13 21:11:24.051: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 13 21:11:24.051: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 13 21:11:24.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:11:24.253: INFO: stderr: "I0513 21:11:24.187640 76 log.go:172] (0xc0005302c0) (0xc0006eb4a0) Create stream\nI0513 21:11:24.187712 76 log.go:172] (0xc0005302c0) (0xc0006eb4a0) Stream added, broadcasting: 1\nI0513 21:11:24.190421 76 log.go:172] (0xc0005302c0) Reply frame received for 1\nI0513 21:11:24.190465 76 log.go:172] (0xc0005302c0) (0xc00096a000) Create stream\nI0513 21:11:24.190479 76 log.go:172] (0xc0005302c0) (0xc00096a000) Stream added, broadcasting: 3\nI0513 21:11:24.191319 76 log.go:172] (0xc0005302c0) Reply frame received for 3\nI0513 21:11:24.191355 76 log.go:172] (0xc0005302c0) (0xc0006e7a40) Create stream\nI0513 21:11:24.191366 76 log.go:172] (0xc0005302c0) (0xc0006e7a40) Stream added, broadcasting: 5\nI0513 21:11:24.192075 76 log.go:172] (0xc0005302c0) Reply frame received for 5\nI0513 21:11:24.244060 76 log.go:172] (0xc0005302c0) Data frame received for 3\nI0513 21:11:24.244091 76 log.go:172] (0xc00096a000) (3) Data frame handling\nI0513 21:11:24.244120 76 log.go:172] (0xc00096a000) (3) Data frame sent\nI0513 21:11:24.244135 76 log.go:172] (0xc0005302c0) Data frame received for 3\nI0513 21:11:24.244146 76 log.go:172] (0xc00096a000) (3) Data frame handling\nI0513 21:11:24.244198 76 log.go:172] (0xc0005302c0) Data frame received for 5\nI0513 21:11:24.244226 76 log.go:172] (0xc0006e7a40) (5) Data frame handling\nI0513 21:11:24.244249 76 log.go:172] (0xc0006e7a40) (5) Data frame sent\nI0513 21:11:24.244263 76 log.go:172] (0xc0005302c0) Data frame received for 5\nI0513 21:11:24.244270 76 log.go:172] (0xc0006e7a40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0513 21:11:24.246416 76 log.go:172] (0xc0005302c0) Data frame received for 1\nI0513 21:11:24.246450 76 log.go:172] (0xc0006eb4a0) (1) Data frame handling\nI0513 21:11:24.246481 76 log.go:172] (0xc0006eb4a0) (1) Data frame sent\nI0513 21:11:24.246575 76 log.go:172] (0xc0005302c0) (0xc0006eb4a0) Stream removed, broadcasting: 1\nI0513 21:11:24.246606 76 log.go:172] (0xc0005302c0) Go away received\nI0513 21:11:24.247081 76 log.go:172] (0xc0005302c0) (0xc0006eb4a0) Stream removed, broadcasting: 1\nI0513 21:11:24.247107 76 log.go:172] (0xc0005302c0) (0xc00096a000) Stream removed, broadcasting: 3\nI0513 21:11:24.247127 76 log.go:172] (0xc0005302c0) (0xc0006e7a40) Stream removed, broadcasting: 5\n"
May 13 21:11:24.253: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 13 21:11:24.253: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 13 21:11:24.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:11:24.454: INFO: stderr: "I0513 21:11:24.374362 97 log.go:172] (0xc000b14000) (0xc000902000) Create stream\nI0513 21:11:24.374450 97 log.go:172] (0xc000b14000) (0xc000902000) Stream added, broadcasting: 1\nI0513 21:11:24.377378 97 log.go:172] (0xc000b14000) Reply frame received for 1\nI0513 21:11:24.377463 97 log.go:172] (0xc000b14000) (0xc0009020a0) Create stream\nI0513 21:11:24.377479 97 log.go:172] (0xc000b14000) (0xc0009020a0) Stream added, broadcasting: 3\nI0513 21:11:24.378434 97 log.go:172] (0xc000b14000) Reply frame received for 3\nI0513 21:11:24.378454 97 log.go:172] (0xc000b14000) (0xc000902140) Create stream\nI0513 21:11:24.378461 97 log.go:172] (0xc000b14000) (0xc000902140) Stream added, broadcasting: 5\nI0513 21:11:24.379429 97 log.go:172] (0xc000b14000) Reply frame received for 5\nI0513 21:11:24.445930 97 log.go:172] (0xc000b14000) Data frame received for 3\nI0513 21:11:24.446109 97 log.go:172] (0xc0009020a0) (3) Data frame handling\nI0513 21:11:24.446228 97 log.go:172] (0xc0009020a0) (3) Data frame sent\nI0513 21:11:24.446271 97 log.go:172] (0xc000b14000) Data frame received for 3\nI0513 21:11:24.446297 97 log.go:172] (0xc0009020a0) (3) Data frame handling\nI0513 21:11:24.446318 97 log.go:172] (0xc000b14000) Data frame received for 5\nI0513 21:11:24.446333 97 log.go:172] (0xc000902140) (5) Data frame handling\nI0513 21:11:24.446349 97 log.go:172] (0xc000902140) (5) Data frame sent\nI0513 21:11:24.446365 97 log.go:172] (0xc000b14000) Data frame received for 5\nI0513 21:11:24.446379 97 log.go:172] (0xc000902140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0513 21:11:24.447836 97 log.go:172] (0xc000b14000) Data frame received for 1\nI0513 21:11:24.447866 97 log.go:172] (0xc000902000) (1) Data frame handling\nI0513 21:11:24.447881 97 log.go:172] (0xc000902000) (1) Data frame sent\nI0513 21:11:24.447910 97 log.go:172] (0xc000b14000) (0xc000902000) Stream removed, broadcasting: 1\nI0513 21:11:24.447941 97 log.go:172] (0xc000b14000) Go away received\nI0513 21:11:24.448281 97 log.go:172] (0xc000b14000) (0xc000902000) Stream removed, broadcasting: 1\nI0513 21:11:24.448301 97 log.go:172] (0xc000b14000) (0xc0009020a0) Stream removed, broadcasting: 3\nI0513 21:11:24.448313 97 log.go:172] (0xc000b14000) (0xc000902140) Stream removed, broadcasting: 5\n"
May 13 21:11:24.454: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 13 21:11:24.454: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 13 21:11:24.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
May 13 21:11:34.463: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 13 21:11:34.463: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 13 21:11:34.463: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 13 21:11:34.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 13 21:11:34.682: INFO: stderr: "I0513 21:11:34.591711 117 log.go:172] (0xc0009c66e0) (0xc000691f40) Create stream\nI0513 21:11:34.591760 117 log.go:172] (0xc0009c66e0) (0xc000691f40) Stream added, broadcasting: 1\nI0513 21:11:34.593957 117 log.go:172] (0xc0009c66e0) Reply frame received for 1\nI0513 21:11:34.593998 117 log.go:172] (0xc0009c66e0) (0xc000638820) Create stream\nI0513 21:11:34.594010 117 log.go:172] (0xc0009c66e0) (0xc000638820) Stream added, broadcasting: 3\nI0513 21:11:34.594763 117 log.go:172] (0xc0009c66e0) Reply frame received for 3\nI0513 21:11:34.594791 117 log.go:172] (0xc0009c66e0) (0xc0002415e0) Create stream\nI0513 21:11:34.594803 117 log.go:172] (0xc0009c66e0) (0xc0002415e0) Stream added, broadcasting: 5\nI0513 21:11:34.595640 117 log.go:172] (0xc0009c66e0) Reply frame received for 5\nI0513 21:11:34.676407 117 log.go:172] (0xc0009c66e0) Data frame received for 5\nI0513 21:11:34.676432 117 log.go:172] (0xc0002415e0) (5) Data frame handling\nI0513 21:11:34.676448 117 log.go:172] (0xc0002415e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:11:34.676473 117 log.go:172] (0xc0009c66e0) Data frame received for 3\nI0513 21:11:34.676484 117 log.go:172] (0xc000638820) (3) Data frame handling\nI0513 21:11:34.676495 117 log.go:172] (0xc000638820) (3) Data frame sent\nI0513 21:11:34.676503 117 log.go:172] (0xc0009c66e0) Data frame received for 3\nI0513 21:11:34.676511 117 log.go:172] (0xc000638820) (3) Data frame handling\nI0513 21:11:34.676734 117 log.go:172] (0xc0009c66e0) Data frame received for 5\nI0513 21:11:34.676763 117 log.go:172] (0xc0002415e0) (5) Data frame handling\nI0513 21:11:34.678298 117 log.go:172] (0xc0009c66e0) Data frame received for 1\nI0513 21:11:34.678320 117 log.go:172] (0xc000691f40) (1) Data frame handling\nI0513 21:11:34.678337 117 log.go:172] (0xc000691f40) (1) Data frame sent\nI0513 21:11:34.678372 117 log.go:172] (0xc0009c66e0) (0xc000691f40) Stream removed, broadcasting: 1\nI0513 21:11:34.678396 117 log.go:172] (0xc0009c66e0) Go away received\nI0513 21:11:34.678645 117 log.go:172] (0xc0009c66e0) (0xc000691f40) Stream removed, broadcasting: 1\nI0513 21:11:34.678660 117 log.go:172] (0xc0009c66e0) (0xc000638820) Stream removed, broadcasting: 3\nI0513 21:11:34.678674 117 log.go:172] (0xc0009c66e0) (0xc0002415e0) Stream removed, broadcasting: 5\n"
May 13 21:11:34.682: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 13 21:11:34.682: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 13 21:11:34.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 13 21:11:34.947: INFO: stderr: "I0513 21:11:34.814529 139 log.go:172] (0xc000948630) (0xc0006f5f40) Create stream\nI0513 21:11:34.814569 139 log.go:172] (0xc000948630) (0xc0006f5f40) Stream added, broadcasting: 1\nI0513 21:11:34.816858 139 log.go:172] (0xc000948630) Reply frame received for 1\nI0513 21:11:34.816889 139 log.go:172] (0xc000948630) (0xc000624820) Create stream\nI0513 21:11:34.816899 139 log.go:172] (0xc000948630) (0xc000624820) Stream added, broadcasting: 3\nI0513 21:11:34.817723 139 log.go:172] (0xc000948630) Reply frame received for 3\nI0513 21:11:34.817755 139 log.go:172] (0xc000948630) (0xc000a58000) Create stream\nI0513 21:11:34.817768 139 log.go:172] (0xc000948630) (0xc000a58000) Stream added, broadcasting: 5\nI0513 21:11:34.818494 139 log.go:172] (0xc000948630) Reply frame received for 5\nI0513 21:11:34.916219 139 log.go:172] (0xc000948630) Data frame received for 5\nI0513 21:11:34.916244 139 log.go:172] (0xc000a58000) (5) Data frame handling\nI0513 21:11:34.916262 139 log.go:172] (0xc000a58000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:11:34.941752 139 log.go:172] (0xc000948630) Data frame received for 5\nI0513 21:11:34.941789 139 log.go:172] (0xc000948630) Data frame received for 3\nI0513 21:11:34.941844 139 log.go:172] (0xc000624820) (3) Data frame handling\nI0513 21:11:34.941869 139 log.go:172] (0xc000624820) (3) Data frame sent\nI0513 21:11:34.941882 139 log.go:172] (0xc000948630) Data frame received for 3\nI0513 21:11:34.941892 139 log.go:172] (0xc000624820) (3) Data frame handling\nI0513 21:11:34.941907 139 log.go:172] (0xc000a58000) (5) Data frame handling\nI0513 21:11:34.943127 139 log.go:172] (0xc000948630) Data frame received for 1\nI0513 21:11:34.943157 139 log.go:172] (0xc0006f5f40) (1) Data frame handling\nI0513 21:11:34.943183 139 log.go:172] (0xc0006f5f40) (1) Data frame sent\nI0513 21:11:34.943224 139 log.go:172] (0xc000948630) (0xc0006f5f40) Stream removed, broadcasting: 1\nI0513 21:11:34.943261 139 log.go:172] (0xc000948630) Go away received\nI0513 21:11:34.943468 139 log.go:172] (0xc000948630) (0xc0006f5f40) Stream removed, broadcasting: 1\nI0513 21:11:34.943482 139 log.go:172] (0xc000948630) (0xc000624820) Stream removed, broadcasting: 3\nI0513 21:11:34.943489 139 log.go:172] (0xc000948630) (0xc000a58000) Stream removed, broadcasting: 5\n"
May 13 21:11:34.947: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 13 21:11:34.947: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 13 21:11:34.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 13 21:11:35.170: INFO: stderr: "I0513 21:11:35.074313 159 log.go:172] (0xc000902a50) (0xc0005dfae0) Create stream\nI0513 21:11:35.074374 159 log.go:172] (0xc000902a50) (0xc0005dfae0) Stream added, broadcasting: 1\nI0513 21:11:35.076681 159 log.go:172] (0xc000902a50) Reply frame received for 1\nI0513 21:11:35.076746 159 log.go:172] (0xc000902a50) (0xc000950000) Create stream\nI0513 21:11:35.076774 159 log.go:172] (0xc000902a50) (0xc000950000) Stream added, broadcasting: 3\nI0513 21:11:35.077888 159 log.go:172] (0xc000902a50) Reply frame received for 3\nI0513 21:11:35.077934 159 log.go:172] (0xc000902a50) (0xc000950140) Create stream\nI0513 21:11:35.077953 159 log.go:172] (0xc000902a50) (0xc000950140) Stream added, broadcasting: 5\nI0513 21:11:35.078874 159 log.go:172] (0xc000902a50) Reply frame received for 5\nI0513 21:11:35.127595 159 log.go:172] (0xc000902a50) Data frame received for 5\nI0513 21:11:35.127622 159 log.go:172] (0xc000950140) (5) Data frame handling\nI0513 21:11:35.127639 159 log.go:172] (0xc000950140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:11:35.163294 159 log.go:172] (0xc000902a50) Data frame received for 3\nI0513 21:11:35.163322 159 log.go:172] (0xc000950000) (3) Data frame handling\nI0513 21:11:35.163374 159 log.go:172] (0xc000950000) (3) Data frame sent\nI0513 21:11:35.163634 159 log.go:172] (0xc000902a50) Data frame received for 3\nI0513 21:11:35.163655 159 log.go:172] (0xc000950000) (3) Data frame handling\nI0513 21:11:35.163680 159 log.go:172] (0xc000902a50) Data frame received for 5\nI0513 21:11:35.163701 159 log.go:172] (0xc000950140) (5) Data frame handling\nI0513 21:11:35.164944 159 log.go:172] (0xc000902a50) Data frame received for 1\nI0513 21:11:35.164974 159 log.go:172] (0xc0005dfae0) (1) Data frame handling\nI0513 21:11:35.165002 159 log.go:172] (0xc0005dfae0) (1) Data frame sent\nI0513 21:11:35.165032 159 log.go:172] (0xc000902a50) (0xc0005dfae0) Stream removed, broadcasting: 1\nI0513 21:11:35.165092 159 log.go:172] (0xc000902a50) Go away received\nI0513 21:11:35.165621 159 log.go:172] (0xc000902a50) (0xc0005dfae0) Stream removed, broadcasting: 1\nI0513 21:11:35.165642 159 log.go:172] (0xc000902a50) (0xc000950000) Stream removed, broadcasting: 3\nI0513 21:11:35.165654 159 log.go:172] (0xc000902a50) (0xc000950140) Stream removed, broadcasting: 5\n"
May 13 21:11:35.170: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 13 21:11:35.170: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 13 21:11:35.170: INFO: Waiting for statefulset status.replicas updated to 0
May 13 21:11:35.199: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May 13 21:11:45.206: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 13 21:11:45.206: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 13 21:11:45.206: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 13 21:11:45.258: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:45.258: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:45.258: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:45.258: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:45.258: INFO: 
May 13 21:11:45.258: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:46.372: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:46.372: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:46.372: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:46.372: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:46.372: INFO: 
May 13 21:11:46.372: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:47.432: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:47.432: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:47.432: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:47.432: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:47.432: INFO: 
May 13 21:11:47.432: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:48.442: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:48.442: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:48.442: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:48.442: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:48.442: INFO: 
May 13 21:11:48.442: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:49.447: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:49.447: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:49.447: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:49.447: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:49.447: INFO: 
May 13 21:11:49.447: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:50.452: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:50.452: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:50.452: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:50.452: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:50.452: INFO: 
May 13 21:11:50.452: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:51.457: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:51.457: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:51.458: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:51.458: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:51.458: INFO: 
May 13 21:11:51.458: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:52.463: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:52.463: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:52.463: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:52.463: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:52.463: INFO: 
May 13 21:11:52.463: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:53.467: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:53.467: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:53.468: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:53.468: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:53.468: INFO: 
May 13 21:11:53.468: INFO: StatefulSet ss has not reached scale 0, at 3
May 13 21:11:54.472: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 21:11:54.472: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:10:50 +0000 UTC }]
May 13 21:11:54.472: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:54.472: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-13 21:11:13 +0000 UTC }]
May 13 21:11:54.472: INFO: 
May 13 21:11:54.472: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3488
May 13 21:11:55.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:11:55.620: INFO: rc: 1
May 13 21:11:55.620: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
May 13 21:12:05.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:12:05.722: INFO: rc: 1
May 13 21:12:05.722: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:12:15.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:12:15.906: INFO: rc: 1
May 13 21:12:15.906: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:12:25.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:12:26.035: INFO: rc: 1
May 13 21:12:26.035: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:12:36.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:12:36.190: INFO: rc: 1
May 13 21:12:36.191: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:12:46.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:12:46.286: INFO: rc: 1
May 13 21:12:46.286: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:12:56.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:12:56.392: INFO: rc: 1
May 13 21:12:56.392: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:13:06.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:13:06.490: INFO: rc: 1
May 13 21:13:06.490: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:13:16.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:13:16.603: INFO: rc: 1
May 13 21:13:16.603: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:13:26.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:13:26.776: INFO: rc: 1
May 13 21:13:26.776: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:13:36.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:13:36.879: INFO: rc: 1
May 13 21:13:36.879: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:13:46.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:13:46.980: INFO: rc: 1
May 13 21:13:46.980: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:13:56.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:13:57.083: INFO: rc: 1
May 13 21:13:57.083: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:14:07.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:14:07.182: INFO: rc: 1
May 13 21:14:07.182: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:14:17.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:14:17.289: INFO: rc: 1
May 13 21:14:17.289: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:14:27.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:14:27.388: INFO: rc: 1
May 13 21:14:27.388: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:14:37.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:14:37.528: INFO: rc: 1
May 13 21:14:37.528: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:14:47.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:14:47.758: INFO: rc: 1
May 13 21:14:47.758: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:14:57.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:14:57.859: INFO: rc: 1
May 13 21:14:57.859: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:15:07.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:15:07.962: INFO: rc: 1
May 13 21:15:07.962: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:15:17.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:15:18.064: INFO: rc: 1
May 13 21:15:18.064: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:15:28.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:15:28.168: INFO: rc: 1
May 13 21:15:28.169: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:15:38.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:15:38.274: INFO: rc: 1
May 13 21:15:38.274: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:15:48.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:15:48.371: INFO: rc: 1
May 13 21:15:48.371: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:15:58.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:15:58.471: INFO: rc: 1
May 13 21:15:58.471: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:16:08.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:16:08.569: INFO: rc: 1
May 13 21:16:08.569: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:16:18.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:16:18.671: INFO: rc: 1
May 13 21:16:18.671: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:16:28.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:16:28.779: INFO: rc: 1
May 13 21:16:28.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:16:38.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:16:38.878: INFO: rc: 1
May 13 21:16:38.878: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:16:48.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:16:48.983: INFO: rc: 1
May 13 21:16:48.983: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 13 21:16:58.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3488 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 21:16:59.071: INFO: rc: 1
May 13 21:16:59.071: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
May 13 21:16:59.071: INFO: Scaling statefulset ss to 0
May 13 21:16:59.090: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 13 21:16:59.092: INFO: Deleting all statefulset in ns statefulset-3488
May 13 21:16:59.095: INFO: Scaling statefulset ss to 0
May 13 21:16:59.101: INFO: Waiting for statefulset status.replicas updated to 0
May 13 21:16:59.103: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:16:59.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3488" for this suite. • [SLOW TEST:368.672 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":3,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:16:59.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:17:15.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9873" for this suite. • [SLOW TEST:16.742 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":4,"skipped":101,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:17:15.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 13 21:17:15.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6730 -- logs-generator --log-lines-total 100 --run-duration 20s' May 13 21:17:16.039: INFO: stderr: "" May 13 21:17:16.039: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 13 21:17:16.039: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 13 21:17:16.039: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6730" to be "running and ready, or succeeded" May 13 21:17:16.042: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348412ms May 13 21:17:18.046: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007271654s May 13 21:17:20.050: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.011231472s May 13 21:17:20.051: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 13 21:17:20.051: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 13 21:17:20.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6730' May 13 21:17:20.188: INFO: stderr: "" May 13 21:17:20.188: INFO: stdout: "I0513 21:17:18.422209 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/tsrq 365\nI0513 21:17:18.622350 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/trfh 286\nI0513 21:17:18.822362 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/62d 547\nI0513 21:17:19.022370 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/5rt 234\nI0513 21:17:19.222366 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/pcsr 522\nI0513 21:17:19.422333 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/r84 498\nI0513 21:17:19.622350 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/7xgj 260\nI0513 21:17:19.822372 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/5pzs 299\nI0513 21:17:20.022495 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/phr 328\n" STEP: limiting log lines May 13 21:17:20.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6730 --tail=1' May 13 21:17:20.300: INFO: stderr: "" May 13 21:17:20.300: INFO: stdout: "I0513 21:17:20.222345 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/bxc 332\n" May 13 21:17:20.300: INFO: got output "I0513 21:17:20.222345 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/bxc 332\n" STEP: limiting log bytes May 13 21:17:20.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6730 --limit-bytes=1' May 13 21:17:20.440: INFO: stderr: "" May 13 21:17:20.440: INFO: stdout: "I" May 13 21:17:20.440: INFO: got output "I" STEP: exposing timestamps May 13 21:17:20.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6730 --tail=1 --timestamps' May 13 21:17:20.558: INFO: stderr: "" May 13 21:17:20.558: INFO: stdout: "2020-05-13T21:17:20.422552274Z I0513 21:17:20.422361 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/62w5 213\n" May 13 21:17:20.558: INFO: got output "2020-05-13T21:17:20.422552274Z I0513 21:17:20.422361 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/62w5 213\n" STEP: restricting to a time range May 13 21:17:23.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6730 --since=1s' May 13 21:17:23.181: INFO: stderr: "" May 13 21:17:23.182: INFO: stdout: "I0513 21:17:22.222335 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/x6x 568\nI0513 21:17:22.422342 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/8nx 357\nI0513 21:17:22.622320 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/66d 390\nI0513 21:17:22.822304 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/vlnw 354\nI0513 21:17:23.022352 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/5lq 408\n" May 13 21:17:23.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6730 --since=24h' May 13 21:17:23.289: INFO: stderr: "" May 13 21:17:23.289: INFO: stdout: "I0513 21:17:18.422209 1 logs_generator.go:76] 0 PUT
/api/v1/namespaces/default/pods/tsrq 365\nI0513 21:17:18.622350 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/trfh 286\nI0513 21:17:18.822362 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/62d 547\nI0513 21:17:19.022370 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/5rt 234\nI0513 21:17:19.222366 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/pcsr 522\nI0513 21:17:19.422333 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/r84 498\nI0513 21:17:19.622350 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/7xgj 260\nI0513 21:17:19.822372 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/5pzs 299\nI0513 21:17:20.022495 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/phr 328\nI0513 21:17:20.222345 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/bxc 332\nI0513 21:17:20.422361 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/62w5 213\nI0513 21:17:20.622385 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/wj7 427\nI0513 21:17:20.822345 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/tzhq 246\nI0513 21:17:21.022336 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/8xjw 234\nI0513 21:17:21.222340 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/7dfc 487\nI0513 21:17:21.422368 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/wkn8 377\nI0513 21:17:21.622361 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/t2rv 441\nI0513 21:17:21.822353 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/kj2n 534\nI0513 21:17:22.022410 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/pkp 241\nI0513 21:17:22.222335 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/x6x 568\nI0513 21:17:22.422342 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/8nx 357\nI0513 21:17:22.622320 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/66d 390\nI0513 21:17:22.822304 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/vlnw 354\nI0513 21:17:23.022352 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/5lq 408\nI0513 21:17:23.222356 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/rxcs 300\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 13 21:17:23.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6730' May 13 21:17:25.958: INFO: stderr: "" May 13 21:17:25.958: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:17:25.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6730" for this suite. 
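Aside, not part of the recorded run: the filtering flags exercised above are stock kubectl options, so the sequence can be reproduced by hand. A minimal sketch, assuming the logs-generator pod from this trace is still running in namespace kubectl-6730:

    # Tail only the most recent line
    kubectl logs logs-generator -n kubectl-6730 --tail=1
    # Cap the output at a single byte
    kubectl logs logs-generator -n kubectl-6730 --limit-bytes=1
    # Prefix each line with its RFC3339 timestamp
    kubectl logs logs-generator -n kubectl-6730 --tail=1 --timestamps
    # Keep only entries from a relative time window
    kubectl logs logs-generator -n kubectl-6730 --since=1s

The doubled logs-generator argument in the trace is the container name; it is only required when a pod runs more than one container.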
• [SLOW TEST:10.102 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":5,"skipped":105,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:17:25.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-ca9c7bef-e9ca-4a99-a6cf-9521d6e136f4 STEP: Creating a pod to test consume secrets May 13 21:17:26.077: INFO: Waiting up to 5m0s for pod "pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5" in namespace "secrets-9162" to be "success or failure" May 13 21:17:26.080: INFO: Pod "pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711656ms May 13 21:17:28.107: INFO: Pod "pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029482342s May 13 21:17:30.111: INFO: Pod "pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033749232s STEP: Saw pod success May 13 21:17:30.111: INFO: Pod "pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5" satisfied condition "success or failure" May 13 21:17:30.114: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5 container secret-volume-test: STEP: delete the pod May 13 21:17:30.135: INFO: Waiting for pod pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5 to disappear May 13 21:17:30.140: INFO: Pod pod-secrets-1623582c-d014-4a0a-b2fb-a8e2ed0172d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:17:30.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9162" for this suite. 
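Aside, not part of the recorded run: the "mappings" this spec exercises are the items list of a secret volume, which remaps a secret key onto a chosen file path inside the mount. The e2e framework builds the pod programmatically; a hand-built equivalent with hypothetical names (secret-test-map, new-path-data-1) might look like:

    kubectl create secret generic secret-test-map -n secrets-9162 --from-literal=data-1=value-1
    kubectl apply -n secrets-9162 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-example
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map
          items:                 # the key-to-path mapping under test
          - key: data-1
            path: new-path-data-1
    EOF

Without the items list the key would surface at /etc/secret-volume/data-1; with it, only the remapped path is created.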
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":117,"failed":0} SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:17:30.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e1db34f8-ff7e-478f-83d7-3c96406904b1 STEP: Creating a pod to test consume secrets May 13 21:17:30.674: INFO: Waiting up to 5m0s for pod "pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52" in namespace "secrets-5518" to be "success or failure" May 13 21:17:30.679: INFO: Pod "pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.645781ms May 13 21:17:32.790: INFO: Pod "pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11683281s May 13 21:17:34.794: INFO: Pod "pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120704045s STEP: Saw pod success May 13 21:17:34.794: INFO: Pod "pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52" satisfied condition "success or failure" May 13 21:17:34.798: INFO: Trying to get logs from node jerma-worker pod pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52 container secret-volume-test: STEP: delete the pod May 13 21:17:35.342: INFO: Waiting for pod pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52 to disappear May 13 21:17:35.351: INFO: Pod pod-secrets-73df68c3-ca28-4b4a-b46a-d7126c2f4f52 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:17:35.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5518" for this suite. STEP: Destroying namespace "secret-namespace-7472" for this suite. 
• [SLOW TEST:5.254 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":119,"failed":0} S ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:17:35.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-4175 STEP: creating replication controller nodeport-test in namespace services-4175 I0513 21:17:35.642246 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4175, replica count: 2 I0513 21:17:38.692621 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:17:41.692854 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 21:17:41.692: INFO: Creating new exec pod May 13 21:17:46.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4175 execpod98d4m -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 13 21:17:47.006: INFO: stderr: "I0513 21:17:46.861988 933 log.go:172] (0xc000a87130) (0xc0008d2320) Create stream\nI0513 21:17:46.862084 933 log.go:172] (0xc000a87130) (0xc0008d2320) Stream added, broadcasting: 1\nI0513 21:17:46.866328 933 log.go:172] (0xc000a87130) Reply frame received for 1\nI0513 21:17:46.866378 933 log.go:172] (0xc000a87130) (0xc0006f8500) Create stream\nI0513 21:17:46.866397 933 log.go:172] (0xc000a87130) (0xc0006f8500) Stream added, broadcasting: 3\nI0513 21:17:46.867261 933 log.go:172] (0xc000a87130) Reply frame received for 3\nI0513 21:17:46.867311 933 log.go:172] (0xc000a87130) (0xc0005c92c0) Create stream\nI0513 21:17:46.867328 933 log.go:172] (0xc000a87130) (0xc0005c92c0) Stream added, broadcasting: 5\nI0513 21:17:46.868187 933 log.go:172] (0xc000a87130) Reply frame received for 5\nI0513 21:17:46.972112 933 log.go:172] (0xc000a87130) Data frame received for 5\nI0513 21:17:46.972154 933 log.go:172] (0xc0005c92c0) (5) Data frame handling\nI0513 21:17:46.972174 933 log.go:172] (0xc0005c92c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0513 21:17:46.997458 933 
log.go:172] (0xc000a87130) Data frame received for 5\nI0513 21:17:46.997515 933 log.go:172] (0xc0005c92c0) (5) Data frame handling\nI0513 21:17:46.997551 933 log.go:172] (0xc0005c92c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0513 21:17:46.998540 933 log.go:172] (0xc000a87130) Data frame received for 5\nI0513 21:17:46.998563 933 log.go:172] (0xc0005c92c0) (5) Data frame handling\nI0513 21:17:46.998589 933 log.go:172] (0xc000a87130) Data frame received for 3\nI0513 21:17:46.998613 933 log.go:172] (0xc0006f8500) (3) Data frame handling\nI0513 21:17:47.000460 933 log.go:172] (0xc000a87130) Data frame received for 1\nI0513 21:17:47.000499 933 log.go:172] (0xc0008d2320) (1) Data frame handling\nI0513 21:17:47.000540 933 log.go:172] (0xc0008d2320) (1) Data frame sent\nI0513 21:17:47.000567 933 log.go:172] (0xc000a87130) (0xc0008d2320) Stream removed, broadcasting: 1\nI0513 21:17:47.000602 933 log.go:172] (0xc000a87130) Go away received\nI0513 21:17:47.001463 933 log.go:172] (0xc000a87130) (0xc0008d2320) Stream removed, broadcasting: 1\nI0513 21:17:47.001503 933 log.go:172] (0xc000a87130) (0xc0006f8500) Stream removed, broadcasting: 3\nI0513 21:17:47.001527 933 log.go:172] (0xc000a87130) (0xc0005c92c0) Stream removed, broadcasting: 5\n" May 13 21:17:47.006: INFO: stdout: "" May 13 21:17:47.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4175 execpod98d4m -- /bin/sh -x -c nc -zv -t -w 2 10.100.127.202 80' May 13 21:17:47.225: INFO: stderr: "I0513 21:17:47.138837 950 log.go:172] (0xc000a08210) (0xc0005ec640) Create stream\nI0513 21:17:47.138912 950 log.go:172] (0xc000a08210) (0xc0005ec640) Stream added, broadcasting: 1\nI0513 21:17:47.140743 950 log.go:172] (0xc000a08210) Reply frame received for 1\nI0513 21:17:47.140812 950 log.go:172] (0xc000a08210) (0xc000908000) Create stream\nI0513 21:17:47.140831 950 log.go:172] (0xc000a08210) (0xc000908000) Stream added, broadcasting: 3\nI0513 21:17:47.141969 950 log.go:172] (0xc000a08210) Reply frame received for 3\nI0513 21:17:47.142000 950 log.go:172] (0xc000a08210) (0xc0005ec6e0) Create stream\nI0513 21:17:47.142007 950 log.go:172] (0xc000a08210) (0xc0005ec6e0) Stream added, broadcasting: 5\nI0513 21:17:47.142981 950 log.go:172] (0xc000a08210) Reply frame received for 5\nI0513 21:17:47.212702 950 log.go:172] (0xc000a08210) Data frame received for 5\nI0513 21:17:47.212737 950 log.go:172] (0xc0005ec6e0) (5) Data frame handling\nI0513 21:17:47.212752 950 log.go:172] (0xc0005ec6e0) (5) Data frame sent\nI0513 21:17:47.212759 950 log.go:172] (0xc000a08210) Data frame received for 5\nI0513 21:17:47.212764 950 log.go:172] (0xc0005ec6e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.127.202 80\nConnection to 10.100.127.202 80 port [tcp/http] succeeded!\nI0513 21:17:47.212779 950 log.go:172] (0xc0005ec6e0) (5) Data frame sent\nI0513 21:17:47.213514 950 log.go:172] (0xc000a08210) Data frame received for 3\nI0513 21:17:47.213550 950 log.go:172] (0xc000908000) (3) Data frame handling\nI0513 21:17:47.217949 950 log.go:172] (0xc000a08210) Data frame received for 1\nI0513 21:17:47.217979 950 log.go:172] (0xc0005ec640) (1) Data frame handling\nI0513 21:17:47.217997 950 log.go:172] (0xc0005ec640) (1) Data frame sent\nI0513 21:17:47.218016 950 log.go:172] (0xc000a08210) (0xc0005ec640) Stream removed, broadcasting: 1\nI0513 21:17:47.220773 950 log.go:172] (0xc000a08210) Data frame received for 5\nI0513 21:17:47.220802 950 log.go:172] (0xc0005ec6e0) (5) Data frame handling\nI0513 
21:17:47.220828 950 log.go:172] (0xc000a08210) Go away received\nI0513 21:17:47.221089 950 log.go:172] (0xc000a08210) (0xc0005ec640) Stream removed, broadcasting: 1\nI0513 21:17:47.221241 950 log.go:172] (0xc000a08210) (0xc000908000) Stream removed, broadcasting: 3\nI0513 21:17:47.221279 950 log.go:172] (0xc000a08210) (0xc0005ec6e0) Stream removed, broadcasting: 5\n" May 13 21:17:47.225: INFO: stdout: "" May 13 21:17:47.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4175 execpod98d4m -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32435' May 13 21:17:47.499: INFO: stderr: "I0513 21:17:47.351522 969 log.go:172] (0xc0009980b0) (0xc0007a6140) Create stream\nI0513 21:17:47.351577 969 log.go:172] (0xc0009980b0) (0xc0007a6140) Stream added, broadcasting: 1\nI0513 21:17:47.353841 969 log.go:172] (0xc0009980b0) Reply frame received for 1\nI0513 21:17:47.353873 969 log.go:172] (0xc0009980b0) (0xc0005e9c20) Create stream\nI0513 21:17:47.353882 969 log.go:172] (0xc0009980b0) (0xc0005e9c20) Stream added, broadcasting: 3\nI0513 21:17:47.354706 969 log.go:172] (0xc0009980b0) Reply frame received for 3\nI0513 21:17:47.354757 969 log.go:172] (0xc0009980b0) (0xc0007a61e0) Create stream\nI0513 21:17:47.354773 969 log.go:172] (0xc0009980b0) (0xc0007a61e0) Stream added, broadcasting: 5\nI0513 21:17:47.355502 969 log.go:172] (0xc0009980b0) Reply frame received for 5\nI0513 21:17:47.491999 969 log.go:172] (0xc0009980b0) Data frame received for 3\nI0513 21:17:47.492039 969 log.go:172] (0xc0005e9c20) (3) Data frame handling\nI0513 21:17:47.492068 969 log.go:172] (0xc0009980b0) Data frame received for 5\nI0513 21:17:47.492084 969 log.go:172] (0xc0007a61e0) (5) Data frame handling\nI0513 21:17:47.492097 969 log.go:172] (0xc0007a61e0) (5) Data frame sent\nI0513 21:17:47.492110 969 log.go:172] (0xc0009980b0) Data frame received for 5\nI0513 21:17:47.492135 969 log.go:172] (0xc0007a61e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32435\nConnection to 172.17.0.10 32435 port [tcp/32435] succeeded!\nI0513 21:17:47.493810 969 log.go:172] (0xc0009980b0) Data frame received for 1\nI0513 21:17:47.493843 969 log.go:172] (0xc0007a6140) (1) Data frame handling\nI0513 21:17:47.493857 969 log.go:172] (0xc0007a6140) (1) Data frame sent\nI0513 21:17:47.493873 969 log.go:172] (0xc0009980b0) (0xc0007a6140) Stream removed, broadcasting: 1\nI0513 21:17:47.494044 969 log.go:172] (0xc0009980b0) Go away received\nI0513 21:17:47.494354 969 log.go:172] (0xc0009980b0) (0xc0007a6140) Stream removed, broadcasting: 1\nI0513 21:17:47.494370 969 log.go:172] (0xc0009980b0) (0xc0005e9c20) Stream removed, broadcasting: 3\nI0513 21:17:47.494380 969 log.go:172] (0xc0009980b0) (0xc0007a61e0) Stream removed, broadcasting: 5\n" May 13 21:17:47.499: INFO: stdout: "" May 13 21:17:47.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4175 execpod98d4m -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32435' May 13 21:17:47.718: INFO: stderr: "I0513 21:17:47.628292 988 log.go:172] (0xc000918a50) (0xc000701a40) Create stream\nI0513 21:17:47.628348 988 log.go:172] (0xc000918a50) (0xc000701a40) Stream added, broadcasting: 1\nI0513 21:17:47.630944 988 log.go:172] (0xc000918a50) Reply frame received for 1\nI0513 21:17:47.630972 988 log.go:172] (0xc000918a50) (0xc00097a000) Create stream\nI0513 21:17:47.630980 988 log.go:172] (0xc000918a50) (0xc00097a000) Stream added, broadcasting: 3\nI0513 21:17:47.631712 988 log.go:172] (0xc000918a50) Reply frame received for 
3\nI0513 21:17:47.631729 988 log.go:172] (0xc000918a50) (0xc000701c20) Create stream\nI0513 21:17:47.631736 988 log.go:172] (0xc000918a50) (0xc000701c20) Stream added, broadcasting: 5\nI0513 21:17:47.632671 988 log.go:172] (0xc000918a50) Reply frame received for 5\nI0513 21:17:47.711060 988 log.go:172] (0xc000918a50) Data frame received for 5\nI0513 21:17:47.711106 988 log.go:172] (0xc000701c20) (5) Data frame handling\nI0513 21:17:47.711134 988 log.go:172] (0xc000701c20) (5) Data frame sent\nI0513 21:17:47.711149 988 log.go:172] (0xc000918a50) Data frame received for 5\nI0513 21:17:47.711159 988 log.go:172] (0xc000701c20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32435\nConnection to 172.17.0.8 32435 port [tcp/32435] succeeded!\nI0513 21:17:47.711211 988 log.go:172] (0xc000918a50) Data frame received for 3\nI0513 21:17:47.711227 988 log.go:172] (0xc00097a000) (3) Data frame handling\nI0513 21:17:47.712800 988 log.go:172] (0xc000918a50) Data frame received for 1\nI0513 21:17:47.712822 988 log.go:172] (0xc000701a40) (1) Data frame handling\nI0513 21:17:47.712838 988 log.go:172] (0xc000701a40) (1) Data frame sent\nI0513 21:17:47.712857 988 log.go:172] (0xc000918a50) (0xc000701a40) Stream removed, broadcasting: 1\nI0513 21:17:47.712878 988 log.go:172] (0xc000918a50) Go away received\nI0513 21:17:47.713691 988 log.go:172] (0xc000918a50) (0xc000701a40) Stream removed, broadcasting: 1\nI0513 21:17:47.713725 988 log.go:172] (0xc000918a50) (0xc00097a000) Stream removed, broadcasting: 3\nI0513 21:17:47.713746 988 log.go:172] (0xc000918a50) (0xc000701c20) Stream removed, broadcasting: 5\n" May 13 21:17:47.718: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:17:47.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4175" for this suite. 
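Aside, not part of the recorded run: the nc probes above walk the reachability matrix a NodePort service must satisfy: the service DNS name on its port, the ClusterIP on the same port, and every node's IP on the allocated node port (32435 in this run). Restated as plain commands using the names and addresses from this trace:

    NS=services-4175; POD=execpod98d4m
    kubectl exec -n "$NS" "$POD" -- /bin/sh -c 'nc -zv -t -w 2 nodeport-test 80'     # service name via cluster DNS
    kubectl exec -n "$NS" "$POD" -- /bin/sh -c 'nc -zv -t -w 2 10.100.127.202 80'    # ClusterIP
    kubectl exec -n "$NS" "$POD" -- /bin/sh -c 'nc -zv -t -w 2 172.17.0.10 32435'    # first node IP:nodePort
    kubectl exec -n "$NS" "$POD" -- /bin/sh -c 'nc -zv -t -w 2 172.17.0.8 32435'     # second node IP:nodePort

nc -z only tests that the TCP connect succeeds, and -w 2 bounds each probe at two seconds.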
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.325 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":8,"skipped":120,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:17:47.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:17:47.825: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 13 21:17:47.847: INFO: Pod name sample-pod: Found 0 pods out of 1 May 13 21:17:52.850: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 13 21:17:52.850: INFO: Creating deployment "test-rolling-update-deployment" May 13 21:17:52.854: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 13 21:17:52.874: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 13 21:17:54.880: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 13 21:17:54.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001472, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001472, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001472, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001472, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:17:56.887: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 13 21:17:56.895: 
INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7250 /apis/apps/v1/namespaces/deployment-7250/deployments/test-rolling-update-deployment 35540dba-dc53-4a0a-b141-f21ddf596359 15938270 1 2020-05-13 21:17:52 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001db53e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-13 21:17:52 +0000 UTC,LastTransitionTime:2020-05-13 21:17:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-13 21:17:56 +0000 UTC,LastTransitionTime:2020-05-13 21:17:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 13 21:17:56.898: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7250 /apis/apps/v1/namespaces/deployment-7250/replicasets/test-rolling-update-deployment-67cf4f6444 f091852b-ca54-489c-a0f7-1766448b1ac4 15938259 1 2020-05-13 21:17:52 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 35540dba-dc53-4a0a-b141-f21ddf596359 0xc001db5887 0xc001db5888}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 
0xc001db58f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 13 21:17:56.898: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 13 21:17:56.899: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7250 /apis/apps/v1/namespaces/deployment-7250/replicasets/test-rolling-update-controller 737e4e86-e0fc-4f97-8703-9f6ff81bdd92 15938269 2 2020-05-13 21:17:47 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 35540dba-dc53-4a0a-b141-f21ddf596359 0xc001db57b7 0xc001db57b8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001db5818 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 21:17:56.902: INFO: Pod "test-rolling-update-deployment-67cf4f6444-5f4nh" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-5f4nh test-rolling-update-deployment-67cf4f6444- deployment-7250 /api/v1/namespaces/deployment-7250/pods/test-rolling-update-deployment-67cf4f6444-5f4nh 0659a9a0-e487-4e03-94ed-fc6980bd7b75 15938258 0 2020-05-13 21:17:52 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 f091852b-ca54-489c-a0f7-1766448b1ac4 0xc001db5d57 0xc001db5d58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2ccd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2ccd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2ccd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:17:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:17:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:17:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:17:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.126,StartTime:2020-05-13 21:17:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 21:17:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://806b2d9b4f7deb0d306b1a8ecc7f3263ccd93a9b3ef0c9b4d43ad2cf6a19a5ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:17:56.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7250" for this suite. • [SLOW TEST:9.184 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":9,"skipped":137,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:17:56.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 13 21:17:57.057: INFO: Waiting up to 5m0s for pod "pod-ae423212-bc80-4f64-871c-89edf6f76af7" in namespace "emptydir-9579" to be "success or failure" May 13 21:17:57.063: INFO: Pod "pod-ae423212-bc80-4f64-871c-89edf6f76af7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156751ms May 13 21:17:59.067: INFO: Pod "pod-ae423212-bc80-4f64-871c-89edf6f76af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009560902s May 13 21:18:01.070: INFO: Pod "pod-ae423212-bc80-4f64-871c-89edf6f76af7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012620409s STEP: Saw pod success May 13 21:18:01.070: INFO: Pod "pod-ae423212-bc80-4f64-871c-89edf6f76af7" satisfied condition "success or failure" May 13 21:18:01.072: INFO: Trying to get logs from node jerma-worker2 pod pod-ae423212-bc80-4f64-871c-89edf6f76af7 container test-container: STEP: delete the pod May 13 21:18:01.135: INFO: Waiting for pod pod-ae423212-bc80-4f64-871c-89edf6f76af7 to disappear May 13 21:18:01.141: INFO: Pod pod-ae423212-bc80-4f64-871c-89edf6f76af7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:18:01.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9579" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":146,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:18:01.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:18:02.101: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:18:04.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:18:06.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001482, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:18:09.154: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:18:09.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6432-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:18:10.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1387" for this suite. STEP: Destroying namespace "webhook-1387-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.325 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":11,"skipped":150,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:18:10.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 13 21:18:10.601: INFO: Waiting up to 5m0s for pod "var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7" in namespace "var-expansion-5065" to be "success or failure" May 13 21:18:10.610: INFO: Pod "var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.596472ms May 13 21:18:12.614: INFO: Pod "var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013323901s May 13 21:18:14.617: INFO: Pod "var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016596486s STEP: Saw pod success May 13 21:18:14.617: INFO: Pod "var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7" satisfied condition "success or failure" May 13 21:18:14.619: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7 container dapi-container: STEP: delete the pod May 13 21:18:14.690: INFO: Waiting for pod var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7 to disappear May 13 21:18:14.711: INFO: Pod var-expansion-30f5b027-de6f-49cb-86ce-2d00eb1d23f7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:18:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5065" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:18:14.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0513 21:18:26.461054 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 13 21:18:26.461: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:18:26.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8857" for this suite. • [SLOW TEST:11.748 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":13,"skipped":184,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:18:26.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:18:31.606: INFO: Waiting up to 5m0s for pod "client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45" in namespace "pods-3236" to be "success or failure" May 13 21:18:31.617: INFO: Pod "client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45": Phase="Pending", Reason="", readiness=false. Elapsed: 10.353872ms May 13 21:18:33.792: INFO: Pod "client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185107961s May 13 21:18:35.977: INFO: Pod "client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370755824s May 13 21:18:37.980: INFO: Pod "client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.373859821s STEP: Saw pod success May 13 21:18:37.980: INFO: Pod "client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45" satisfied condition "success or failure" May 13 21:18:37.983: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45 container env3cont: STEP: delete the pod May 13 21:18:38.055: INFO: Waiting for pod client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45 to disappear May 13 21:18:38.066: INFO: Pod client-envvars-4e7562ff-46c0-417b-b523-dca83681cd45 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:18:38.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3236" for this suite. • [SLOW TEST:11.604 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":200,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:18:38.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 21:18:38.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-1162' May 13 21:18:38.295: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 13 21:18:38.295: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 13 21:18:42.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1162' May 13 21:18:42.552: INFO: stderr: "" May 13 21:18:42.552: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:18:42.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1162" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":15,"skipped":205,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:18:42.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 13 21:18:42.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8084' May 13 21:18:43.002: INFO: stderr: "" May 13 21:18:43.002: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 21:18:43.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8084' May 13 21:18:43.129: INFO: stderr: "" May 13 21:18:43.129: INFO: stdout: "update-demo-nautilus-dwpst update-demo-nautilus-fk295 " May 13 21:18:43.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwpst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:18:43.230: INFO: stderr: "" May 13 21:18:43.230: INFO: stdout: "" May 13 21:18:43.230: INFO: update-demo-nautilus-dwpst is created but not running May 13 21:18:48.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8084' May 13 21:18:48.358: INFO: stderr: "" May 13 21:18:48.358: INFO: stdout: "update-demo-nautilus-dwpst update-demo-nautilus-fk295 " May 13 21:18:48.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwpst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:18:48.451: INFO: stderr: "" May 13 21:18:48.451: INFO: stdout: "true" May 13 21:18:48.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwpst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:18:48.555: INFO: stderr: "" May 13 21:18:48.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:18:48.555: INFO: validating pod update-demo-nautilus-dwpst May 13 21:18:48.594: INFO: got data: { "image": "nautilus.jpg" } May 13 21:18:48.594: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:18:48.594: INFO: update-demo-nautilus-dwpst is verified up and running May 13 21:18:48.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fk295 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:18:48.680: INFO: stderr: "" May 13 21:18:48.680: INFO: stdout: "true" May 13 21:18:48.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fk295 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:18:48.761: INFO: stderr: "" May 13 21:18:48.761: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:18:48.761: INFO: validating pod update-demo-nautilus-fk295 May 13 21:18:48.764: INFO: got data: { "image": "nautilus.jpg" } May 13 21:18:48.764: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 13 21:18:48.764: INFO: update-demo-nautilus-fk295 is verified up and running STEP: rolling-update to new replication controller May 13 21:18:48.766: INFO: scanned /root for discovery docs: May 13 21:18:48.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8084' May 13 21:19:12.387: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 13 21:19:12.387: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 21:19:12.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8084' May 13 21:19:12.488: INFO: stderr: "" May 13 21:19:12.488: INFO: stdout: "update-demo-kitten-fm78v update-demo-kitten-wxs6k " May 13 21:19:12.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fm78v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:19:12.583: INFO: stderr: "" May 13 21:19:12.583: INFO: stdout: "true" May 13 21:19:12.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fm78v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:19:12.692: INFO: stderr: "" May 13 21:19:12.692: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 13 21:19:12.692: INFO: validating pod update-demo-kitten-fm78v May 13 21:19:12.697: INFO: got data: { "image": "kitten.jpg" } May 13 21:19:12.697: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 13 21:19:12.697: INFO: update-demo-kitten-fm78v is verified up and running May 13 21:19:12.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wxs6k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:19:12.799: INFO: stderr: "" May 13 21:19:12.799: INFO: stdout: "true" May 13 21:19:12.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wxs6k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8084' May 13 21:19:12.897: INFO: stderr: "" May 13 21:19:12.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 13 21:19:12.897: INFO: validating pod update-demo-kitten-wxs6k May 13 21:19:12.909: INFO: got data: { "image": "kitten.jpg" } May 13 21:19:12.909: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 13 21:19:12.909: INFO: update-demo-kitten-wxs6k is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:19:12.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8084" for this suite. • [SLOW TEST:30.354 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":16,"skipped":221,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:19:12.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 13 21:19:12.954: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:19:28.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-882" for this suite. 
• [SLOW TEST:15.344 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":17,"skipped":226,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:19:28.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-da29a6ad-896d-4675-a442-80139aa0d0cf STEP: Creating a pod to test consume configMaps May 13 21:19:28.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da" in namespace "configmap-8052" to be "success or failure" May 13 21:19:28.444: INFO: Pod "pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da": Phase="Pending", Reason="", readiness=false. Elapsed: 16.266899ms May 13 21:19:30.474: INFO: Pod "pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046538422s May 13 21:19:32.479: INFO: Pod "pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da": Phase="Running", Reason="", readiness=true. Elapsed: 4.05104114s May 13 21:19:34.483: INFO: Pod "pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055372082s STEP: Saw pod success May 13 21:19:34.483: INFO: Pod "pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da" satisfied condition "success or failure" May 13 21:19:34.486: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da container configmap-volume-test: STEP: delete the pod May 13 21:19:34.521: INFO: Waiting for pod pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da to disappear May 13 21:19:34.534: INFO: Pod pod-configmaps-f79bb747-a8ea-4610-994d-46721e4a05da no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:19:34.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8052" for this suite. 
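The ConfigMap-volume cases in this suite all follow the same shape: create a ConfigMap, mount it as a volume into a short-lived pod whose container prints the mounted file, wait for the pod to reach Succeeded (the "success or failure" condition above), then compare the container log against the expected data. A hand-written equivalent, with placeholder names and image rather than the generated fixtures:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
EOF
kubectl logs pod-configmaps-example   # once the pod has Succeeded, should print: value-1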
• [SLOW TEST:6.286 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:19:34.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 13 21:19:34.718: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4966 /api/v1/namespaces/watch-4966/configmaps/e2e-watch-test-label-changed fa64b06f-779a-4d19-ad9c-8d8eaaa58ab3 15939116 0 2020-05-13 21:19:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 13 21:19:34.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4966 /api/v1/namespaces/watch-4966/configmaps/e2e-watch-test-label-changed fa64b06f-779a-4d19-ad9c-8d8eaaa58ab3 15939117 0 2020-05-13 21:19:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 13 21:19:34.718: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4966 /api/v1/namespaces/watch-4966/configmaps/e2e-watch-test-label-changed fa64b06f-779a-4d19-ad9c-8d8eaaa58ab3 15939118 0 2020-05-13 21:19:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 13 21:19:44.783: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4966 /api/v1/namespaces/watch-4966/configmaps/e2e-watch-test-label-changed fa64b06f-779a-4d19-ad9c-8d8eaaa58ab3 15939159 0 2020-05-13 21:19:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} 
May 13 21:19:44.784: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4966 /api/v1/namespaces/watch-4966/configmaps/e2e-watch-test-label-changed fa64b06f-779a-4d19-ad9c-8d8eaaa58ab3 15939160 0 2020-05-13 21:19:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 13 21:19:44.784: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4966 /api/v1/namespaces/watch-4966/configmaps/e2e-watch-test-label-changed fa64b06f-779a-4d19-ad9c-8d8eaaa58ab3 15939161 0 2020-05-13 21:19:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:19:44.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4966" for this suite. • [SLOW TEST:10.382 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":19,"skipped":258,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:19:44.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-9c845f83-5605-4339-a27e-877d8d76d40a in namespace container-probe-5903 May 13 21:19:49.047: INFO: Started pod liveness-9c845f83-5605-4339-a27e-877d8d76d40a in namespace container-probe-5903 STEP: checking the pod's current state and verifying that restartCount is present May 13 21:19:49.050: INFO: Initial restart count of pod liveness-9c845f83-5605-4339-a27e-877d8d76d40a is 0 May 13 21:20:03.113: INFO: Restart count of pod container-probe-5903/liveness-9c845f83-5605-4339-a27e-877d8d76d40a is now 1 (14.063651207s elapsed) May 13 21:20:23.150: INFO: Restart count of pod container-probe-5903/liveness-9c845f83-5605-4339-a27e-877d8d76d40a is now 2 (34.100640297s elapsed) May 13 21:20:43.191: INFO: Restart count of pod container-probe-5903/liveness-9c845f83-5605-4339-a27e-877d8d76d40a is now 3 (54.141700443s elapsed) May 13 21:21:03.231: INFO: Restart count of pod 
container-probe-5903/liveness-9c845f83-5605-4339-a27e-877d8d76d40a is now 4 (1m14.181376044s elapsed) May 13 21:22:03.416: INFO: Restart count of pod container-probe-5903/liveness-9c845f83-5605-4339-a27e-877d8d76d40a is now 5 (2m14.366446257s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:22:03.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5903" for this suite. • [SLOW TEST:138.533 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":264,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:22:03.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:22:03.578: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 19.973929ms)
May 13 21:22:03.628: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 50.260718ms)
May 13 21:22:03.632: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.791482ms)
May 13 21:22:03.635: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.137489ms)
May 13 21:22:03.638: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.295914ms)
May 13 21:22:03.642: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.41051ms)
May 13 21:22:03.645: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.864604ms)
May 13 21:22:03.648: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.939289ms)
May 13 21:22:03.651: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.892584ms)
May 13 21:22:03.654: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.92107ms)
May 13 21:22:03.746: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 92.271528ms)
May 13 21:22:03.759: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 12.931852ms)
May 13 21:22:03.909: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 149.640663ms)
May 13 21:22:04.124: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 215.301587ms)
May 13 21:22:04.127: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.46857ms)
May 13 21:22:04.129: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.408826ms)
May 13 21:22:04.131: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.303095ms)
May 13 21:22:04.133: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.100069ms)
May 13 21:22:04.136: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.083772ms)
May 13 21:22:04.138: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.106373ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:22:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3884" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":21,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:22:04.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:22:41.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7791" for this suite.
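Going by the container names, the three fixtures above exercise one container that exits under each restart policy: terminate-cmd-rpa (Always), -rpof (OnFailure) and -rpn (Never); that reading is inferred from the names, not stated in the log. The asserted fields (RestartCount, Phase, Ready, State) all live in the pod's status. A hand-rolled version of the Never case, with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
# Once the pod has terminated, phase, restart count and exit code can be
# read back with the same template printer the suite uses:
kubectl get pod terminate-cmd-example -o template \
  --template='{{.status.phase}} {{range .status.containerStatuses}}{{.restartCount}} {{.state.terminated.exitCode}}{{end}}'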
• [SLOW TEST:37.006 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":291,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:22:41.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:22:41.212: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 13 21:22:43.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8001 create -f -' May 13 21:22:48.464: INFO: stderr: "" May 13 21:22:48.464: INFO: stdout: "e2e-test-crd-publish-openapi-8580-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 13 21:22:48.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8001 delete e2e-test-crd-publish-openapi-8580-crds test-cr' May 13 21:22:48.617: INFO: stderr: "" May 13 21:22:48.617: INFO: stdout: "e2e-test-crd-publish-openapi-8580-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 13 21:22:48.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8001 apply -f -' May 13 21:22:48.873: INFO: stderr: "" May 13 21:22:48.873: INFO: stdout: "e2e-test-crd-publish-openapi-8580-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 13 21:22:48.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8001 delete e2e-test-crd-publish-openapi-8580-crds test-cr' May 13 21:22:48.985: INFO: stderr: "" May 13 21:22:48.985: INFO: stdout: "e2e-test-crd-publish-openapi-8580-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 13 21:22:48.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8580-crds' May 13 21:22:49.215: INFO: stderr: "" May 13 21:22:49.215: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8580-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:22:52.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8001" for this suite. • [SLOW TEST:10.976 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":23,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:22:52.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-8bf13128-f699-4ce1-891a-2607e47517e2 STEP: Creating a pod to test consume configMaps May 13 21:22:52.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a" in namespace "configmap-7448" to be "success or failure" May 13 21:22:52.220: INFO: Pod "pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.776876ms May 13 21:22:54.223: INFO: Pod "pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007060843s May 13 21:22:56.228: INFO: Pod "pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011429843s May 13 21:22:58.231: INFO: Pod "pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015280053s STEP: Saw pod success May 13 21:22:58.232: INFO: Pod "pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a" satisfied condition "success or failure" May 13 21:22:58.234: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a container configmap-volume-test: STEP: delete the pod May 13 21:22:58.269: INFO: Waiting for pod pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a to disappear May 13 21:22:58.274: INFO: Pod pod-configmaps-dd229908-f05c-4dd7-8b3f-315aa1f6c06a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:22:58.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7448" for this suite. • [SLOW TEST:6.152 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":324,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:22:58.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 21:23:02.386: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:23:02.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1817" for this suite. 
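The FallbackToLogsOnError case above can be reproduced directly: the container writes its message to the terminationMessagePath file (by default /dev/termination-log), and the FallbackToLogsOnError policy only falls back to the tail of the container log when that file is empty and the container exited with an error; here the file is populated, so the message comes from the file even though the pod succeeds. A sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod succeeds, the message ("OK", matching the expectation above)
# is surfaced in the container status:
kubectl get pod termination-message-example -o template \
  --template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'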
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":333,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:23:02.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1481 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1481 STEP: creating replication controller externalsvc in namespace services-1481 I0513 21:23:02.987427 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1481, replica count: 2 I0513 21:23:06.037867 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:23:09.038109 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 13 21:23:09.103: INFO: Creating new exec pod May 13 21:23:13.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1481 execpodrscp6 -- /bin/sh -x -c nslookup nodeport-service' May 13 21:23:13.460: INFO: stderr: "I0513 21:23:13.257679 1445 log.go:172] (0xc0000d9080) (0xc0009b0280) Create stream\nI0513 21:23:13.257729 1445 log.go:172] (0xc0000d9080) (0xc0009b0280) Stream added, broadcasting: 1\nI0513 21:23:13.261041 1445 log.go:172] (0xc0000d9080) Reply frame received for 1\nI0513 21:23:13.261079 1445 log.go:172] (0xc0000d9080) (0xc0009a8000) Create stream\nI0513 21:23:13.261094 1445 log.go:172] (0xc0000d9080) (0xc0009a8000) Stream added, broadcasting: 3\nI0513 21:23:13.261948 1445 log.go:172] (0xc0000d9080) Reply frame received for 3\nI0513 21:23:13.261974 1445 log.go:172] (0xc0000d9080) (0xc00059e820) Create stream\nI0513 21:23:13.261982 1445 log.go:172] (0xc0000d9080) (0xc00059e820) Stream added, broadcasting: 5\nI0513 21:23:13.262548 1445 log.go:172] (0xc0000d9080) Reply frame received for 5\nI0513 21:23:13.386083 1445 log.go:172] (0xc0000d9080) Data frame received for 5\nI0513 21:23:13.386109 1445 log.go:172] (0xc00059e820) (5) Data frame handling\nI0513 21:23:13.386128 1445 log.go:172] (0xc00059e820) (5) Data frame sent\n+ nslookup nodeport-service\nI0513 21:23:13.452332 1445 log.go:172] (0xc0000d9080) Data frame received for 3\nI0513 21:23:13.452357 1445 log.go:172] (0xc0009a8000) (3) Data 
frame handling\nI0513 21:23:13.452374 1445 log.go:172] (0xc0009a8000) (3) Data frame sent\nI0513 21:23:13.453682 1445 log.go:172] (0xc0000d9080) Data frame received for 3\nI0513 21:23:13.453709 1445 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0513 21:23:13.453732 1445 log.go:172] (0xc0009a8000) (3) Data frame sent\nI0513 21:23:13.454089 1445 log.go:172] (0xc0000d9080) Data frame received for 3\nI0513 21:23:13.454104 1445 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0513 21:23:13.454227 1445 log.go:172] (0xc0000d9080) Data frame received for 5\nI0513 21:23:13.454246 1445 log.go:172] (0xc00059e820) (5) Data frame handling\nI0513 21:23:13.455799 1445 log.go:172] (0xc0000d9080) Data frame received for 1\nI0513 21:23:13.455810 1445 log.go:172] (0xc0009b0280) (1) Data frame handling\nI0513 21:23:13.455819 1445 log.go:172] (0xc0009b0280) (1) Data frame sent\nI0513 21:23:13.455826 1445 log.go:172] (0xc0000d9080) (0xc0009b0280) Stream removed, broadcasting: 1\nI0513 21:23:13.455832 1445 log.go:172] (0xc0000d9080) Go away received\nI0513 21:23:13.456233 1445 log.go:172] (0xc0000d9080) (0xc0009b0280) Stream removed, broadcasting: 1\nI0513 21:23:13.456256 1445 log.go:172] (0xc0000d9080) (0xc0009a8000) Stream removed, broadcasting: 3\nI0513 21:23:13.456268 1445 log.go:172] (0xc0000d9080) (0xc00059e820) Stream removed, broadcasting: 5\n" May 13 21:23:13.461: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1481.svc.cluster.local\tcanonical name = externalsvc.services-1481.svc.cluster.local.\nName:\texternalsvc.services-1481.svc.cluster.local\nAddress: 10.99.224.248\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1481, will wait for the garbage collector to delete the pods May 13 21:23:13.520: INFO: Deleting ReplicationController externalsvc took: 6.21794ms May 13 21:23:13.820: INFO: Terminating ReplicationController externalsvc pods took: 300.227979ms May 13 21:23:29.545: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:23:29.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1481" for this suite. 
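The type flip at the heart of this test can also be performed by hand with a single patch (service and DNS names here are the ones from this run; note that moving away from NodePort may additionally require clearing fields such as clusterIP and nodePort, depending on the API server version):

kubectl patch service nodeport-service --namespace=services-1481 --type=merge \
  -p '{"spec": {"type": "ExternalName", "externalName": "externalsvc.services-1481.svc.cluster.local"}}'

After the change, in-cluster resolution of nodeport-service returns a CNAME to the externalName target, which is exactly what the nslookup output above verifies.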
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.134 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":26,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:23:29.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:24:29.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-688" for this suite. 
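The fixture behind this case is a pod whose readiness probe can never pass: readiness, unlike liveness, never restarts the container, so the expected steady state is Ready=false and RestartCount=0 for the full observation window. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-example
spec:
  containers:
  - name: readiness-never
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Expected to stay READY 0/1 with RESTARTS 0 indefinitely:
kubectl get pod readiness-never-example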
• [SLOW TEST:60.090 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":366,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:24:29.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 13 21:24:30.517: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 13 21:24:32.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001870, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001870, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:24:35.907: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:24:35.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:24:37.122: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6278" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.566 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":28,"skipped":370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:24:37.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 13 21:24:37.320: INFO: Waiting up to 5m0s for pod "downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d" in namespace "downward-api-4896" to be "success or failure" May 13 21:24:37.345: INFO: Pod "downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.22222ms May 13 21:24:39.348: INFO: Pod "downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028080516s May 13 21:24:41.351: INFO: Pod "downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031815069s May 13 21:24:43.355: INFO: Pod "downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035262807s STEP: Saw pod success May 13 21:24:43.355: INFO: Pod "downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d" satisfied condition "success or failure" May 13 21:24:43.357: INFO: Trying to get logs from node jerma-worker pod downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d container dapi-container: STEP: delete the pod May 13 21:24:43.385: INFO: Waiting for pod downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d to disappear May 13 21:24:43.432: INFO: Pod downward-api-404ddf03-3f7f-4917-90ef-861d8a29c99d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:24:43.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4896" for this suite. 
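The Downward API test above injects the node's IP into the container environment and reads it back. The relevant piece of the pod spec, sketched in Go (the variable name HOST_IP is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API env var: the kubelet fills HOST_IP (name illustrative)
	// from the pod's status.hostIP at container start, so the value is the
	// IP of whichever node the pod actually landed on.
	env := []corev1.EnvVar{{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}}
	fmt.Printf("%+v\n", env)
}
```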
• [SLOW TEST:6.211 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":460,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:24:43.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 13 21:24:43.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 13 21:24:43.635: INFO: stderr: "" May 13 21:24:43.635: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:24:43.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6956" for this suite. 
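The cluster-info check above simply shells out to kubectl and matches on the output. Roughly equivalent standalone Go, using only the standard library (the kubeconfig path matches the log):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Shell out to kubectl the same way the test does and print the result.
	out, err := exec.Command(
		"kubectl", "--kubeconfig=/root/.kube/config", "cluster-info",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("cluster-info failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```

The ANSI escapes (\x1b[0;32m and so on) in the logged stdout come from kubectl's own color formatting; the test matches the text in between.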
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":30,"skipped":475,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:24:43.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:24:47.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6962" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:24:47.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:24:48.558: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:24:50.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001888, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001888, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001888, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001888, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:24:53.624: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:24:53.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4730" for this suite. STEP: Destroying namespace "webhook-4730-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.018 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":32,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:24:53.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:24:53.996: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 13 21:24:57.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 create -f -' May 13 21:25:00.546: INFO: stderr: "" May 13 21:25:00.546: INFO: stdout: "e2e-test-crd-publish-openapi-4031-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 13 21:25:00.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 delete e2e-test-crd-publish-openapi-4031-crds test-cr' May 13 21:25:00.655: INFO: 
stderr: "" May 13 21:25:00.655: INFO: stdout: "e2e-test-crd-publish-openapi-4031-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 13 21:25:00.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 apply -f -' May 13 21:25:00.922: INFO: stderr: "" May 13 21:25:00.922: INFO: stdout: "e2e-test-crd-publish-openapi-4031-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 13 21:25:00.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 delete e2e-test-crd-publish-openapi-4031-crds test-cr' May 13 21:25:01.032: INFO: stderr: "" May 13 21:25:01.033: INFO: stdout: "e2e-test-crd-publish-openapi-4031-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 13 21:25:01.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4031-crds' May 13 21:25:01.298: INFO: stderr: "" May 13 21:25:01.298: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4031-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:25:04.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4351" for this suite. 
• [SLOW TEST:10.367 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":33,"skipped":591,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:25:04.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:25:04.294: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:25:04.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4637" for this suite. 
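Enabling the /status subresource, which the test above gets, updates, and patches, is a one-line declaration on the CRD version. A sketch with the apiextensions v1 types:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// Declaring the /status subresource on a CRD version.
	version := apiextensionsv1.CustomResourceDefinitionVersion{
		Name:    "v1",
		Served:  true,
		Storage: true,
		Subresources: &apiextensionsv1.CustomResourceSubresources{
			Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
		},
	}
	fmt.Printf("%+v\n", version)
}
```

With the subresource enabled, writes to the main resource ignore .status and writes to /status touch only .status, which is the split the test verifies.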
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":34,"skipped":599,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:25:04.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 13 21:25:05.150: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 13 21:25:15.609: INFO: >>> kubeConfig: /root/.kube/config May 13 21:25:17.563: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:25:29.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9933" for this suite. 
• [SLOW TEST:24.083 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":35,"skipped":600,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:25:29.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:25:29.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2430" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:25:29.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 13 21:25:29.572: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
May 13 21:25:30.131: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 13 21:25:32.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:25:34.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001930, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:25:37.224: INFO: Waited 622.251877ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:25:37.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2763" for this suite. 
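The "Registering the sample API server" step above boils down to creating an APIService object that tells the kube-apiserver to proxy a group/version to the sample server's in-cluster service. A sketch using the kube-aggregator v1 types; the group, version, and service names here are illustrative rather than read from the log:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	// An APIService object: the kube-apiserver proxies requests for this
	// group/version to the named in-cluster service. All names below are
	// illustrative, not read from the log.
	apiService := apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-2763",
				Name:      "sample-api",
			},
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	fmt.Printf("%+v\n", apiService.Spec)
}
```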
• [SLOW TEST:8.436 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":37,"skipped":625,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:25:37.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 21:25:43.450: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:25:43.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1035" for this suite. 
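For the termination-message test above, the kubelet reads the file named by terminationMessagePath after the container exits and reports its contents as the termination message (the "DONE" the log expects). A sketch of the container spec, assuming a busybox image and a UID of 1000 for the non-root requirement (both illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1000) // illustrative non-root UID
	// After the container exits, the kubelet reads the file named by
	// TerminationMessagePath and reports its contents ("DONE" here) as the
	// container's termination message.
	container := corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "busybox", // illustrative
		Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &nonRoot},
	}
	fmt.Printf("%+v\n", container)
}
```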
• [SLOW TEST:5.909 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:25:43.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bf95 STEP: Creating a pod to test atomic-volume-subpath May 13 21:25:43.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bf95" in namespace "subpath-5259" to be "success or failure" May 13 21:25:43.910: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Pending", Reason="", readiness=false. Elapsed: 3.212977ms May 13 21:25:45.914: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006892968s May 13 21:25:47.917: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 4.010513355s May 13 21:25:49.921: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 6.014087672s May 13 21:25:51.924: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 8.017133266s May 13 21:25:53.927: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 10.020364113s May 13 21:25:55.931: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 12.024289196s May 13 21:25:57.936: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 14.029402501s May 13 21:25:59.941: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 16.034095936s May 13 21:26:01.946: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.03876657s May 13 21:26:03.949: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 20.042576978s May 13 21:26:05.953: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Running", Reason="", readiness=true. Elapsed: 22.046168926s May 13 21:26:07.957: INFO: Pod "pod-subpath-test-configmap-bf95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.050215877s STEP: Saw pod success May 13 21:26:07.957: INFO: Pod "pod-subpath-test-configmap-bf95" satisfied condition "success or failure" May 13 21:26:07.959: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-bf95 container test-container-subpath-configmap-bf95: STEP: delete the pod May 13 21:26:08.010: INFO: Waiting for pod pod-subpath-test-configmap-bf95 to disappear May 13 21:26:08.124: INFO: Pod pod-subpath-test-configmap-bf95 no longer exists STEP: Deleting pod pod-subpath-test-configmap-bf95 May 13 21:26:08.124: INFO: Deleting pod "pod-subpath-test-configmap-bf95" in namespace "subpath-5259" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:26:08.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5259" for this suite. • [SLOW TEST:24.348 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":39,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:26:08.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 13 21:26:08.286: INFO: Waiting up to 5m0s for pod "downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9" in namespace "downward-api-2095" to be "success or failure" May 13 21:26:08.316: INFO: Pod "downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.855283ms May 13 21:26:10.320: INFO: Pod "downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033635839s May 13 21:26:12.324: INFO: Pod "downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037221426s STEP: Saw pod success May 13 21:26:12.324: INFO: Pod "downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9" satisfied condition "success or failure" May 13 21:26:12.326: INFO: Trying to get logs from node jerma-worker pod downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9 container dapi-container: STEP: delete the pod May 13 21:26:12.347: INFO: Waiting for pod downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9 to disappear May 13 21:26:12.352: INFO: Pod downward-api-e6e8cb05-b0be-4d79-ba35-f500cfbaf9a9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:26:12.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2095" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:26:12.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 13 21:26:12.444: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 21:26:12.455: INFO: Waiting for terminating namespaces to be deleted... 
May 13 21:26:12.458: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 13 21:26:12.463: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 21:26:12.463: INFO: Container kindnet-cni ready: true, restart count 0 May 13 21:26:12.463: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 21:26:12.463: INFO: Container kube-proxy ready: true, restart count 0 May 13 21:26:12.463: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 13 21:26:12.483: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 21:26:12.483: INFO: Container kube-proxy ready: true, restart count 0 May 13 21:26:12.483: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 13 21:26:12.483: INFO: Container kube-hunter ready: false, restart count 0 May 13 21:26:12.483: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 21:26:12.483: INFO: Container kindnet-cni ready: true, restart count 0 May 13 21:26:12.483: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 13 21:26:12.483: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d5ed3649-c21e-4b3a-9712-1aa84a86935e 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-d5ed3649-c21e-4b3a-9712-1aa84a86935e off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d5ed3649-c21e-4b3a-9712-1aa84a86935e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:26:28.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1125" for this suite. 
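The scheduling test above launches three pods sharing hostPort 54321 and expects all of them to schedule, because a host-port conflict requires the full (hostIP, hostPort, protocol) triple to collide. A sketch of the three port declarations (container port and helper name are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// portDecl builds the host-port declaration for one of the three pods.
func portDecl(hostIP string, proto corev1.Protocol) corev1.ContainerPort {
	return corev1.ContainerPort{
		ContainerPort: 8080, // illustrative
		HostPort:      54321,
		HostIP:        hostIP,
		Protocol:      proto,
	}
}

func main() {
	// Same hostPort everywhere, yet no conflict: the scheduler only rejects
	// a pod when the full (hostIP, hostPort, protocol) triple collides.
	pod1 := portDecl("127.0.0.1", corev1.ProtocolTCP)
	pod2 := portDecl("127.0.0.2", corev1.ProtocolTCP) // different hostIP
	pod3 := portDecl("127.0.0.2", corev1.ProtocolUDP) // different protocol
	fmt.Println(pod1, pod2, pod3)
}
```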
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.380 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":41,"skipped":778,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:26:28.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:26:29.215: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:26:31.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001989, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001989, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001989, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725001989, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:26:34.254: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the 
/apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:26:34.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1498" for this suite. STEP: Destroying namespace "webhook-1498-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":42,"skipped":790,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:26:34.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5115.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5115.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5115.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5115.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:26:41.710: INFO: DNS probes using dns-test-d5c07d79-b0ea-4d15-a053-ba47396e8bcd succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5115.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5115.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5115.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5115.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: 
submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:26:50.306: INFO: DNS probes using dns-test-3b579f17-fdcd-48d3-8259-ccfc30c14c93 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5115.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5115.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5115.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5115.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:26:58.756: INFO: DNS probes using dns-test-f3240510-c554-4152-8c98-dcea645efbaf succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:26:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5115" for this suite. • [SLOW TEST:24.680 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":43,"skipped":792,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:26:59.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 13 21:26:59.646: INFO: Waiting up to 5m0s for pod "var-expansion-276e4333-4df6-448c-a994-292ec7f1a418" in namespace "var-expansion-7140" to be "success or failure" May 13 21:26:59.745: INFO: Pod "var-expansion-276e4333-4df6-448c-a994-292ec7f1a418": Phase="Pending", Reason="", readiness=false. Elapsed: 99.164397ms May 13 21:27:01.757: INFO: Pod "var-expansion-276e4333-4df6-448c-a994-292ec7f1a418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111375755s May 13 21:27:03.765: INFO: Pod "var-expansion-276e4333-4df6-448c-a994-292ec7f1a418": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11921699s May 13 21:27:05.786: INFO: Pod "var-expansion-276e4333-4df6-448c-a994-292ec7f1a418": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.140441876s STEP: Saw pod success May 13 21:27:05.786: INFO: Pod "var-expansion-276e4333-4df6-448c-a994-292ec7f1a418" satisfied condition "success or failure" May 13 21:27:05.789: INFO: Trying to get logs from node jerma-worker pod var-expansion-276e4333-4df6-448c-a994-292ec7f1a418 container dapi-container: STEP: delete the pod May 13 21:27:05.805: INFO: Waiting for pod var-expansion-276e4333-4df6-448c-a994-292ec7f1a418 to disappear May 13 21:27:05.822: INFO: Pod var-expansion-276e4333-4df6-448c-a994-292ec7f1a418 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:27:05.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7140" for this suite. • [SLOW TEST:6.326 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:27:05.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:27:05.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374" in namespace "downward-api-6948" to be "success or failure" May 13 21:27:05.984: INFO: Pod "downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374": Phase="Pending", Reason="", readiness=false. Elapsed: 3.415391ms May 13 21:27:07.988: INFO: Pod "downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007725819s May 13 21:27:09.991: INFO: Pod "downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374": Phase="Running", Reason="", readiness=true. Elapsed: 4.01112458s May 13 21:27:12.008: INFO: Pod "downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028080944s STEP: Saw pod success May 13 21:27:12.008: INFO: Pod "downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374" satisfied condition "success or failure" May 13 21:27:12.011: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374 container client-container: STEP: delete the pod May 13 21:27:12.034: INFO: Waiting for pod downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374 to disappear May 13 21:27:12.093: INFO: Pod downwardapi-volume-4e8cc579-18cd-4e27-817b-16a5435f6374 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:27:12.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6948" for this suite. • [SLOW TEST:6.258 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":835,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:27:12.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:27:23.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4562" for this suite. • [SLOW TEST:11.135 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":46,"skipped":835,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:27:23.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 13 21:27:23.311: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941541 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 13 21:27:23.311: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941541 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 13 21:27:33.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941578 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 13 21:27:33.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941578 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 13 21:27:43.327: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941608 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 13 21:27:43.327: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941608 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 13 21:27:53.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941638 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 13 21:27:53.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-a 52f81384-e43c-40ef-91c7-1b4a106faec6 15941638 0 2020-05-13 21:27:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 13 21:28:03.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-b cbba5b64-71bf-47a8-bf46-c6a9028d3b45 15941667 0 2020-05-13 21:28:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 13 21:28:03.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-b cbba5b64-71bf-47a8-bf46-c6a9028d3b45 15941667 0 2020-05-13 21:28:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 13 21:28:13.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-b cbba5b64-71bf-47a8-bf46-c6a9028d3b45 15941696 0 2020-05-13 21:28:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 13 21:28:13.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5679 /api/v1/namespaces/watch-5679/configmaps/e2e-watch-test-configmap-b cbba5b64-71bf-47a8-bf46-c6a9028d3b45 15941696 0 2020-05-13 21:28:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:28:23.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5679" for this suite. 
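Illustrative aside, not part of the captured run: the notification sequence above (ADDED, then two MODIFIED, then DELETED, each observed twice because both the label-A watcher and the A-or-B watcher match) can be reproduced by hand with kubectl. The namespace, configmap name, and label are taken from the log; the commands themselves are a hypothetical sketch, not what the e2e framework executes.

  # Terminal 1: watch configmaps carrying label A (blocks, printing each change)
  kubectl get configmaps -n watch-5679 -l watch-this-configmap=multiple-watchers-A --watch
  # Terminal 2: trigger the MODIFIED events seen in the log by mutating Data
  kubectl patch configmap e2e-watch-test-configmap-a -n watch-5679 \
    --type merge -p '{"data":{"mutation":"1"}}'
  # Trigger the DELETED events
  kubectl delete configmap e2e-watch-test-configmap-a -n watch-5679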
• [SLOW TEST:60.123 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":47,"skipped":846,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:28:23.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0513 21:28:54.320172 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 13 21:28:54.320: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:28:54.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4146" for this suite. 
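Illustrative aside, not part of the captured run: the orphaning behaviour verified above corresponds to deleting a Deployment with deleteOptions.propagationPolicy=Orphan, which kubectl exposes as a cascade flag. A minimal sketch with an arbitrary name and image; note the flag spelling is --cascade=false on kubectl of this era (v1.17) and --cascade=orphan on v1.20 and later.

  kubectl create deployment sample --image=nginx      # creates a Deployment plus its ReplicaSet
  kubectl delete deployment sample --cascade=false    # delete the owner, orphaning its dependents
  kubectl get replicasets                             # the ReplicaSet survives, as the test asserts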
• [SLOW TEST:30.970 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":48,"skipped":851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:28:54.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 13 21:28:54.381: INFO: >>> kubeConfig: /root/.kube/config May 13 21:28:56.331: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:07.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9851" for this suite. 
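Illustrative aside, not part of the captured run: "show up in OpenAPI documentation" means the kube-apiserver republishes its aggregated OpenAPI spec with a schema entry for each registered CRD, which is also what drives kubectl explain. A sketch with a placeholder CRD kind (the test generates its own e2e kinds); /openapi/v2 is the standard aggregated-spec endpoint.

  # After a CRD is registered, its kind becomes explainable:
  kubectl explain <crd-kind>                 # placeholder kind name
  # The raw aggregated OpenAPI document can be fetched directly:
  kubectl get --raw /openapi/v2 | head -c 300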
• [SLOW TEST:13.453 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":49,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:07.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-pncc STEP: Creating a pod to test atomic-volume-subpath May 13 21:29:07.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pncc" in namespace "subpath-3085" to be "success or failure" May 13 21:29:07.885: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647506ms May 13 21:29:09.920: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038862724s May 13 21:29:11.924: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 4.042641818s May 13 21:29:13.928: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 6.046510208s May 13 21:29:15.932: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 8.050423054s May 13 21:29:17.936: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 10.054663379s May 13 21:29:19.951: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 12.069282393s May 13 21:29:21.955: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 14.07327989s May 13 21:29:23.958: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 16.076766775s May 13 21:29:25.962: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 18.080750513s May 13 21:29:27.967: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 20.085065652s May 13 21:29:29.971: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Running", Reason="", readiness=true. Elapsed: 22.089487175s May 13 21:29:31.975: INFO: Pod "pod-subpath-test-projected-pncc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.093252555s STEP: Saw pod success May 13 21:29:31.975: INFO: Pod "pod-subpath-test-projected-pncc" satisfied condition "success or failure" May 13 21:29:31.978: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-pncc container test-container-subpath-projected-pncc: STEP: delete the pod May 13 21:29:32.026: INFO: Waiting for pod pod-subpath-test-projected-pncc to disappear May 13 21:29:32.048: INFO: Pod pod-subpath-test-projected-pncc no longer exists STEP: Deleting pod pod-subpath-test-projected-pncc May 13 21:29:32.049: INFO: Deleting pod "pod-subpath-test-projected-pncc" in namespace "subpath-3085" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:32.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3085" for this suite. • [SLOW TEST:24.275 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":50,"skipped":947,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:32.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:29:32.253: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b6275bd5-3201-45f3-aa24-fca6ed5c5cd3", Controller:(*bool)(0xc00337806a), BlockOwnerDeletion:(*bool)(0xc00337806b)}} May 13 21:29:32.299: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2914b00d-65d5-469a-9956-8063d1488126", Controller:(*bool)(0xc0034d7ba2), BlockOwnerDeletion:(*bool)(0xc0034d7ba3)}} May 13 21:29:32.307: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4c129828-0fda-4835-af64-60c0260e43c1", Controller:(*bool)(0xc0034d7e0a), BlockOwnerDeletion:(*bool)(0xc0034d7e0b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:37.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7359" for this suite. 
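Illustrative aside, not part of the captured run: the pod1/pod2/pod3 ownerReferences printed above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the test asserts the garbage collector still deletes all three rather than deadlocking on the circle. A hypothetical manifest fragment for one edge of that cycle; the uid is copied from the log and would differ per run, since ownerReferences.uid must match the live owner object.

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: pod3
      uid: b6275bd5-3201-45f3-aa24-fca6ed5c5cd3   # from the log; real UIDs differ per run
      controller: true
      blockOwnerDeletion: true
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1                 # arbitrary placeholder container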
• [SLOW TEST:5.429 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":51,"skipped":964,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:37.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:29:38.552: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:29:40.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002178, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002178, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002178, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002178, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:29:43.693: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 13 21:29:43.717: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:43.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4529" for this suite. STEP: Destroying namespace "webhook-4529-markers" for this suite. 
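Illustrative aside, not part of the captured run: "Registering the crd webhook via the AdmissionRegistration API" amounts to creating a ValidatingWebhookConfiguration whose rules match CREATE of customresourcedefinitions and whose clientConfig points at the sample-webhook service deployed above. A sketch only: the configuration name and path are hypothetical, while the service name and namespace are taken from the log.

  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: deny-crd-creation                  # hypothetical name
  webhooks:
  - name: deny-crd.example.com
    rules:
    - apiGroups: ["apiextensions.k8s.io"]
      apiVersions: ["*"]
      operations: ["CREATE"]
      resources: ["customresourcedefinitions"]
    clientConfig:
      service:
        namespace: webhook-4529
        name: e2e-test-webhook
        path: /crd                           # hypothetical endpoint on the webhook server
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail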
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.462 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":52,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:43.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:29:44.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e" in namespace "projected-8981" to be "success or failure" May 13 21:29:44.307: INFO: Pod "downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e": Phase="Pending", Reason="", readiness=false. Elapsed: 171.689366ms May 13 21:29:46.311: INFO: Pod "downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175163199s May 13 21:29:48.314: INFO: Pod "downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178058097s STEP: Saw pod success May 13 21:29:48.314: INFO: Pod "downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e" satisfied condition "success or failure" May 13 21:29:48.316: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e container client-container: STEP: delete the pod May 13 21:29:48.468: INFO: Waiting for pod downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e to disappear May 13 21:29:48.474: INFO: Pod downwardapi-volume-8fc9c5f9-d14a-4f43-8894-f01935cca30e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:48.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8981" for this suite. 
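Illustrative aside, not part of the captured run: the "downward API volume plugin" exercised here exposes a container's own resource settings as files through a projected volume. A minimal sketch, assuming an arbitrary image and mount path; resource: limits.cpu with divisor: 1m makes the file contain the CPU limit in millicores, which is what the test container reads back.

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example         # hypothetical name
  spec:
    containers:
    - name: client-container
      image: busybox                         # placeholder; the e2e test ships its own image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m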
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":989,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:48.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:48.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7056" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":54,"skipped":999,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:48.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-f9253896-0fb8-4707-83d5-2e0dd27ac776 STEP: Creating secret with name secret-projected-all-test-volume-ccd5a6df-838c-4aec-9dcc-95e8763fefe4 STEP: Creating a pod to test Check all projections for projected volume plugin May 13 21:29:48.644: INFO: Waiting up to 5m0s for pod "projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4" in namespace "projected-6025" to be "success or failure" May 13 21:29:48.668: INFO: Pod "projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.113513ms May 13 21:29:50.672: INFO: Pod "projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027682553s May 13 21:29:52.676: INFO: Pod "projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4": Phase="Running", Reason="", readiness=true. Elapsed: 4.032172811s May 13 21:29:54.680: INFO: Pod "projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036065511s STEP: Saw pod success May 13 21:29:54.680: INFO: Pod "projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4" satisfied condition "success or failure" May 13 21:29:54.683: INFO: Trying to get logs from node jerma-worker pod projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4 container projected-all-volume-test: STEP: delete the pod May 13 21:29:54.703: INFO: Waiting for pod projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4 to disappear May 13 21:29:54.748: INFO: Pod projected-volume-3f26532c-75df-4448-8d6a-4fe843f33ca4 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:29:54.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6025" for this suite. • [SLOW TEST:6.209 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1027,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:29:54.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6523 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6523;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6523 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6523;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6523.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6523.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6523.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6523.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6523.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6523.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6523.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6523.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 204.110.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.110.204_udp@PTR;check="$$(dig +tcp +noall +answer +search 204.110.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.110.204_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6523 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6523;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6523 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6523;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6523.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6523.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6523.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6523.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6523.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6523.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6523.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6523.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6523.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 204.110.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.110.204_udp@PTR;check="$$(dig +tcp +noall +answer +search 204.110.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.110.204_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:30:02.967: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.970: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.972: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.977: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.979: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.982: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:02.984: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.000: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.005: INFO: Unable to read jessie_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.011: INFO: Unable to read jessie_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.013: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.016: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.018: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:03.033: INFO: Lookups using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6523 wheezy_tcp@dns-test-service.dns-6523 wheezy_udp@dns-test-service.dns-6523.svc wheezy_tcp@dns-test-service.dns-6523.svc wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6523 jessie_tcp@dns-test-service.dns-6523 jessie_udp@dns-test-service.dns-6523.svc jessie_tcp@dns-test-service.dns-6523.svc jessie_udp@_http._tcp.dns-test-service.dns-6523.svc jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc] May 13 21:30:08.038: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.041: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.047: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.053: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.059: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.075: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.077: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.080: INFO: Unable to read jessie_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.082: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.085: INFO: Unable to read jessie_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.087: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.089: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.091: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:08.106: INFO: Lookups using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6523 wheezy_tcp@dns-test-service.dns-6523 wheezy_udp@dns-test-service.dns-6523.svc wheezy_tcp@dns-test-service.dns-6523.svc wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6523 jessie_tcp@dns-test-service.dns-6523 jessie_udp@dns-test-service.dns-6523.svc jessie_tcp@dns-test-service.dns-6523.svc jessie_udp@_http._tcp.dns-test-service.dns-6523.svc jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc] May 13 21:30:13.037: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.043: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.046: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523 from pod 
dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.048: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.053: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.055: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.071: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.074: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.076: INFO: Unable to read jessie_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.079: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.082: INFO: Unable to read jessie_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.084: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.091: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:13.115: INFO: Lookups using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6523 wheezy_tcp@dns-test-service.dns-6523 wheezy_udp@dns-test-service.dns-6523.svc wheezy_tcp@dns-test-service.dns-6523.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6523 jessie_tcp@dns-test-service.dns-6523 jessie_udp@dns-test-service.dns-6523.svc jessie_tcp@dns-test-service.dns-6523.svc jessie_udp@_http._tcp.dns-test-service.dns-6523.svc jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc] May 13 21:30:18.038: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.042: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.045: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.049: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.052: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.058: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.081: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.084: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.087: INFO: Unable to read jessie_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.090: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.093: INFO: Unable to read jessie_udp@dns-test-service.dns-6523.svc from pod 
dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.098: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:18.119: INFO: Lookups using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6523 wheezy_tcp@dns-test-service.dns-6523 wheezy_udp@dns-test-service.dns-6523.svc wheezy_tcp@dns-test-service.dns-6523.svc wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6523 jessie_tcp@dns-test-service.dns-6523 jessie_udp@dns-test-service.dns-6523.svc jessie_tcp@dns-test-service.dns-6523.svc jessie_udp@_http._tcp.dns-test-service.dns-6523.svc jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc] May 13 21:30:23.038: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.042: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.045: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.048: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.051: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.054: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod 
dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.080: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.083: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.086: INFO: Unable to read jessie_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.088: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.091: INFO: Unable to read jessie_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.094: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.097: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.100: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:23.122: INFO: Lookups using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6523 wheezy_tcp@dns-test-service.dns-6523 wheezy_udp@dns-test-service.dns-6523.svc wheezy_tcp@dns-test-service.dns-6523.svc wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6523 jessie_tcp@dns-test-service.dns-6523 jessie_udp@dns-test-service.dns-6523.svc jessie_tcp@dns-test-service.dns-6523.svc jessie_udp@_http._tcp.dns-test-service.dns-6523.svc jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc] May 13 21:30:28.037: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.043: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the 
server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.046: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.048: INFO: Unable to read wheezy_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.050: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.052: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.054: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.068: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.071: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.073: INFO: Unable to read jessie_udp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.075: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523 from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.077: INFO: Unable to read jessie_udp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.079: INFO: Unable to read jessie_tcp@dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.081: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.084: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc from pod dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85: the server could not find the requested resource (get pods dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85) May 13 21:30:28.097: INFO: Lookups using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6523 wheezy_tcp@dns-test-service.dns-6523 wheezy_udp@dns-test-service.dns-6523.svc wheezy_tcp@dns-test-service.dns-6523.svc wheezy_udp@_http._tcp.dns-test-service.dns-6523.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6523.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6523 jessie_tcp@dns-test-service.dns-6523 jessie_udp@dns-test-service.dns-6523.svc jessie_tcp@dns-test-service.dns-6523.svc jessie_udp@_http._tcp.dns-test-service.dns-6523.svc jessie_tcp@_http._tcp.dns-test-service.dns-6523.svc] May 13 21:30:33.162: INFO: DNS probes using dns-6523/dns-test-e676d13a-f7d2-4c92-8845-3adbff8d4e85 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:30:33.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6523" for this suite. • [SLOW TEST:39.146 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":56,"skipped":1033,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:30:33.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 13 21:30:34.006: INFO: Waiting up to 5m0s for pod "pod-6bd85c13-624c-44fe-aba3-3aed30c16ade" in namespace "emptydir-4660" to be "success or failure" May 13 21:30:34.015: INFO: Pod "pod-6bd85c13-624c-44fe-aba3-3aed30c16ade": Phase="Pending", Reason="", readiness=false. Elapsed: 9.464205ms May 13 21:30:36.078: INFO: Pod "pod-6bd85c13-624c-44fe-aba3-3aed30c16ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072175502s May 13 21:30:38.082: INFO: Pod "pod-6bd85c13-624c-44fe-aba3-3aed30c16ade": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07583099s STEP: Saw pod success May 13 21:30:38.082: INFO: Pod "pod-6bd85c13-624c-44fe-aba3-3aed30c16ade" satisfied condition "success or failure" May 13 21:30:38.085: INFO: Trying to get logs from node jerma-worker2 pod pod-6bd85c13-624c-44fe-aba3-3aed30c16ade container test-container: STEP: delete the pod May 13 21:30:38.112: INFO: Waiting for pod pod-6bd85c13-624c-44fe-aba3-3aed30c16ade to disappear May 13 21:30:38.246: INFO: Pod pod-6bd85c13-624c-44fe-aba3-3aed30c16ade no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:30:38.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4660" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1042,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:30:38.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 13 21:30:38.310: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:30:38.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3606" for this suite. 
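
The Proxy server case above starts kubectl's local API proxy with port 0, which asks the kernel for any free port, and then curls /api/ through it. A minimal manual sketch, assuming a working kubeconfig; the printed port below is illustrative:

  kubectl proxy --port=0 &
  # kubectl reports the port it was assigned, e.g.:
  #   Starting to serve on 127.0.0.1:42571
  curl http://127.0.0.1:42571/api/

The e2e run additionally passes --disable-filter, which switches off the proxy's request filtering and is only sensible on a throwaway test cluster.
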
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":58,"skipped":1051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:30:38.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0513 21:31:19.636577 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 13 21:31:19.636: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:31:19.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-821" for this suite. 
• [SLOW TEST:41.232 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":59,"skipped":1080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:31:19.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 13 21:31:19.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6289' May 13 21:31:20.118: INFO: stderr: "" May 13 21:31:20.118: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 21:31:20.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6289' May 13 21:31:20.238: INFO: stderr: "" May 13 21:31:20.238: INFO: stdout: "update-demo-nautilus-4mgnq update-demo-nautilus-w2sx6 " May 13 21:31:20.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mgnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:20.322: INFO: stderr: "" May 13 21:31:20.322: INFO: stdout: "" May 13 21:31:20.322: INFO: update-demo-nautilus-4mgnq is created but not running May 13 21:31:25.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6289' May 13 21:31:25.436: INFO: stderr: "" May 13 21:31:25.437: INFO: stdout: "update-demo-nautilus-4mgnq update-demo-nautilus-w2sx6 " May 13 21:31:25.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mgnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:25.519: INFO: stderr: "" May 13 21:31:25.519: INFO: stdout: "true" May 13 21:31:25.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mgnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:25.649: INFO: stderr: "" May 13 21:31:25.649: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:31:25.649: INFO: validating pod update-demo-nautilus-4mgnq May 13 21:31:25.659: INFO: got data: { "image": "nautilus.jpg" } May 13 21:31:25.659: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:31:25.659: INFO: update-demo-nautilus-4mgnq is verified up and running May 13 21:31:25.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2sx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:25.847: INFO: stderr: "" May 13 21:31:25.848: INFO: stdout: "" May 13 21:31:25.848: INFO: update-demo-nautilus-w2sx6 is created but not running May 13 21:31:30.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6289' May 13 21:31:30.959: INFO: stderr: "" May 13 21:31:30.959: INFO: stdout: "update-demo-nautilus-4mgnq update-demo-nautilus-w2sx6 " May 13 21:31:30.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mgnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:31.046: INFO: stderr: "" May 13 21:31:31.046: INFO: stdout: "true" May 13 21:31:31.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mgnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:31.134: INFO: stderr: "" May 13 21:31:31.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:31:31.134: INFO: validating pod update-demo-nautilus-4mgnq May 13 21:31:31.137: INFO: got data: { "image": "nautilus.jpg" } May 13 21:31:31.137: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:31:31.137: INFO: update-demo-nautilus-4mgnq is verified up and running May 13 21:31:31.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2sx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:31.243: INFO: stderr: "" May 13 21:31:31.243: INFO: stdout: "true" May 13 21:31:31.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2sx6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6289' May 13 21:31:31.337: INFO: stderr: "" May 13 21:31:31.337: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:31:31.337: INFO: validating pod update-demo-nautilus-w2sx6 May 13 21:31:31.341: INFO: got data: { "image": "nautilus.jpg" } May 13 21:31:31.341: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:31:31.341: INFO: update-demo-nautilus-w2sx6 is verified up and running STEP: using delete to clean up resources May 13 21:31:31.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6289' May 13 21:31:31.448: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:31:31.448: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 13 21:31:31.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6289' May 13 21:31:31.562: INFO: stderr: "No resources found in kubectl-6289 namespace.\n" May 13 21:31:31.562: INFO: stdout: "" May 13 21:31:31.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6289 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 21:31:31.662: INFO: stderr: "" May 13 21:31:31.662: INFO: stdout: "update-demo-nautilus-4mgnq\nupdate-demo-nautilus-w2sx6\n" May 13 21:31:32.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6289' May 13 21:31:32.268: INFO: stderr: "No resources found in kubectl-6289 namespace.\n" May 13 21:31:32.268: INFO: stdout: "" May 13 21:31:32.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6289 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 21:31:32.366: INFO: stderr: "" May 13 21:31:32.366: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:31:32.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6289" for this suite. • [SLOW TEST:12.727 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":60,"skipped":1105,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:31:32.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:31:43.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6661" for this suite. • [SLOW TEST:11.456 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":61,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:31:43.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 13 21:31:48.491: INFO: Successfully updated pod "pod-update-c5d0364d-89f0-4ced-84cc-84c31f51cd87" STEP: verifying the updated pod is in kubernetes May 13 21:31:48.499: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:31:48.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6210" for this suite. 
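
The Pods update case above mutates a live pod object and re-reads it from the API server. A hand-rolled sketch of the same round trip, with an illustrative pod name and label:

  kubectl patch pod pod-update-example -p '{"metadata":{"labels":{"time":"updated"}}}'
  kubectl get pod pod-update-example --show-labels
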
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:31:48.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 13 21:31:49.126: INFO: created pod pod-service-account-defaultsa May 13 21:31:49.126: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 13 21:31:49.132: INFO: created pod pod-service-account-mountsa May 13 21:31:49.132: INFO: pod pod-service-account-mountsa service account token volume mount: true May 13 21:31:49.138: INFO: created pod pod-service-account-nomountsa May 13 21:31:49.138: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 13 21:31:49.168: INFO: created pod pod-service-account-defaultsa-mountspec May 13 21:31:49.168: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 13 21:31:49.220: INFO: created pod pod-service-account-mountsa-mountspec May 13 21:31:49.220: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 13 21:31:49.232: INFO: created pod pod-service-account-nomountsa-mountspec May 13 21:31:49.232: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 13 21:31:49.256: INFO: created pod pod-service-account-defaultsa-nomountspec May 13 21:31:49.256: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 13 21:31:49.294: INFO: created pod pod-service-account-mountsa-nomountspec May 13 21:31:49.294: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 13 21:31:49.362: INFO: created pod pod-service-account-nomountsa-nomountspec May 13 21:31:49.362: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:31:49.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5506" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":63,"skipped":1197,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:31:49.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 13 21:31:49.559: INFO: Waiting up to 5m0s for pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c" in namespace "containers-2232" to be "success or failure" May 13 21:31:49.769: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 210.026566ms May 13 21:31:51.772: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213280798s May 13 21:31:54.277: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718044092s May 13 21:31:56.303: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.743944389s May 13 21:31:58.313: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753946175s May 13 21:32:01.013: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.454118725s May 13 21:32:03.500: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.940948555s May 13 21:32:05.566: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.007239909s May 13 21:32:07.572: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.01324779s STEP: Saw pod success May 13 21:32:07.572: INFO: Pod "client-containers-881ec05b-fc39-4b24-b1ed-90420770407c" satisfied condition "success or failure" May 13 21:32:07.697: INFO: Trying to get logs from node jerma-worker pod client-containers-881ec05b-fc39-4b24-b1ed-90420770407c container test-container: STEP: delete the pod May 13 21:32:07.906: INFO: Waiting for pod client-containers-881ec05b-fc39-4b24-b1ed-90420770407c to disappear May 13 21:32:07.989: INFO: Pod client-containers-881ec05b-fc39-4b24-b1ed-90420770407c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:07.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2232" for this suite. 
• [SLOW TEST:18.555 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1205,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:07.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:32:08.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4" in namespace "downward-api-7347" to be "success or failure" May 13 21:32:08.307: INFO: Pod "downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 48.086363ms May 13 21:32:10.311: INFO: Pod "downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052115237s May 13 21:32:12.403: INFO: Pod "downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143998604s STEP: Saw pod success May 13 21:32:12.403: INFO: Pod "downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4" satisfied condition "success or failure" May 13 21:32:12.407: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4 container client-container: STEP: delete the pod May 13 21:32:12.466: INFO: Waiting for pod downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4 to disappear May 13 21:32:12.571: INFO: Pod downwardapi-volume-26eadb69-2eb4-45a0-b43e-5f4932eda4c4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:12.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7347" for this suite. 
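
The Downward API volume case above checks that an explicit per-item mode is applied to the projected file; items[].mode sets the rendered file's permission bits. A sketch, assuming a reachable cluster (label, path, and mode are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode
    labels:
      zone: us-east-1
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
          mode: 0400          # owner read-only
  EOF
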
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1208,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:12.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-9cff7a67-6639-43e1-9d4c-a9fc500bde1d STEP: Creating a pod to test consume configMaps May 13 21:32:12.667: INFO: Waiting up to 5m0s for pod "pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347" in namespace "configmap-3503" to be "success or failure" May 13 21:32:12.702: INFO: Pod "pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347": Phase="Pending", Reason="", readiness=false. Elapsed: 35.443531ms May 13 21:32:14.705: INFO: Pod "pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038689928s May 13 21:32:16.710: INFO: Pod "pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043153473s STEP: Saw pod success May 13 21:32:16.710: INFO: Pod "pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347" satisfied condition "success or failure" May 13 21:32:16.713: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347 container configmap-volume-test: STEP: delete the pod May 13 21:32:16.738: INFO: Waiting for pod pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347 to disappear May 13 21:32:16.762: INFO: Pod pod-configmaps-9819dc54-a50b-4432-8cdd-cc69e9e7b347 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:16.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3503" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1223,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:16.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:32:16.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 13 21:32:17.065: INFO: stderr: "" May 13 21:32:17.065: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:17.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4759" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":67,"skipped":1234,"failed":0} ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:17.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:17.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6616" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":68,"skipped":1234,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:17.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 13 21:32:21.887: INFO: Successfully updated pod "labelsupdate7646a628-ca3d-4bdc-98fb-3f7286ddfd4f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:25.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9693" for this suite. 
• [SLOW TEST:8.746 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1238,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:25.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:32:42.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8116" for this suite. • [SLOW TEST:16.501 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":70,"skipped":1245,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:32:42.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-v6wh STEP: Creating a pod to test atomic-volume-subpath May 13 21:32:42.572: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-v6wh" in namespace "subpath-1151" to be "success or failure" May 13 21:32:42.607: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Pending", Reason="", readiness=false. Elapsed: 34.696445ms May 13 21:32:44.619: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046488019s May 13 21:32:46.637: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 4.064912718s May 13 21:32:48.643: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 6.07103889s May 13 21:32:50.645: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 8.073294383s May 13 21:32:52.648: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 10.075840359s May 13 21:32:54.652: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 12.079642719s May 13 21:32:56.655: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 14.082954574s May 13 21:32:58.660: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 16.087650352s May 13 21:33:00.664: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 18.091843181s May 13 21:33:02.668: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 20.095319355s May 13 21:33:04.671: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Running", Reason="", readiness=true. Elapsed: 22.098699351s May 13 21:33:06.675: INFO: Pod "pod-subpath-test-secret-v6wh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.103066105s STEP: Saw pod success May 13 21:33:06.675: INFO: Pod "pod-subpath-test-secret-v6wh" satisfied condition "success or failure" May 13 21:33:06.678: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-v6wh container test-container-subpath-secret-v6wh: STEP: delete the pod May 13 21:33:06.698: INFO: Waiting for pod pod-subpath-test-secret-v6wh to disappear May 13 21:33:06.703: INFO: Pod pod-subpath-test-secret-v6wh no longer exists STEP: Deleting pod pod-subpath-test-secret-v6wh May 13 21:33:06.703: INFO: Deleting pod "pod-subpath-test-secret-v6wh" in namespace "subpath-1151" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:06.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1151" for this suite. • [SLOW TEST:24.325 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":71,"skipped":1247,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:06.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:33:06.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45" in namespace "projected-912" to be "success or failure" May 13 21:33:06.890: INFO: Pod "downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45": Phase="Pending", Reason="", readiness=false. Elapsed: 16.063329ms May 13 21:33:08.900: INFO: Pod "downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026192733s May 13 21:33:10.904: INFO: Pod "downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030457104s STEP: Saw pod success May 13 21:33:10.904: INFO: Pod "downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45" satisfied condition "success or failure" May 13 21:33:10.907: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45 container client-container: STEP: delete the pod May 13 21:33:11.088: INFO: Waiting for pod downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45 to disappear May 13 21:33:11.093: INFO: Pod downwardapi-volume-493c475f-df01-4c79-b6a5-14a21cd09a45 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:11.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-912" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1253,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:11.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 13 21:33:11.287: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix926542393/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:11.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5722" for this suite. 
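
The --unix-socket proxy case above serves the API over a local socket instead of a TCP port, which curl can exercise directly (the socket path is illustrative):

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
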
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":73,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:11.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 13 21:33:11.418: INFO: Waiting up to 5m0s for pod "pod-952daa29-a3df-49c6-bd39-a67e99097888" in namespace "emptydir-6906" to be "success or failure" May 13 21:33:11.437: INFO: Pod "pod-952daa29-a3df-49c6-bd39-a67e99097888": Phase="Pending", Reason="", readiness=false. Elapsed: 18.11903ms May 13 21:33:13.441: INFO: Pod "pod-952daa29-a3df-49c6-bd39-a67e99097888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022727574s May 13 21:33:15.445: INFO: Pod "pod-952daa29-a3df-49c6-bd39-a67e99097888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026681941s STEP: Saw pod success May 13 21:33:15.445: INFO: Pod "pod-952daa29-a3df-49c6-bd39-a67e99097888" satisfied condition "success or failure" May 13 21:33:15.448: INFO: Trying to get logs from node jerma-worker2 pod pod-952daa29-a3df-49c6-bd39-a67e99097888 container test-container: STEP: delete the pod May 13 21:33:15.476: INFO: Waiting for pod pod-952daa29-a3df-49c6-bd39-a67e99097888 to disappear May 13 21:33:15.498: INFO: Pod pod-952daa29-a3df-49c6-bd39-a67e99097888 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:15.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6906" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:15.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 21:33:15.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7611' May 13 21:33:15.695: INFO: stderr: "" May 13 21:33:15.695: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 13 21:33:20.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7611 -o json' May 13 21:33:20.840: INFO: stderr: "" May 13 21:33:20.840: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-13T21:33:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7611\",\n \"resourceVersion\": \"15943576\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7611/pods/e2e-test-httpd-pod\",\n \"uid\": \"afb77ef4-3afe-40d8-b969-b59496699d93\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-qm46c\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"default-token-qm46c\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-qm46c\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-13T21:33:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-13T21:33:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-13T21:33:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-13T21:33:15Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e16df73d833fef8a51867fa4775b7955cc93a00753f945d27b8dff9b0e9df97a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-13T21:33:18Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.63\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.63\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-13T21:33:15Z\"\n }\n}\n" STEP: replace the image in the pod May 13 21:33:20.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7611' May 13 21:33:21.119: INFO: stderr: "" May 13 21:33:21.119: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 13 21:33:21.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7611' May 13 21:33:29.242: INFO: stderr: "" May 13 21:33:29.242: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:29.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7611" for this suite. 
• [SLOW TEST:13.771 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":75,"skipped":1369,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:29.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 13 21:33:33.421: INFO: &Pod{ObjectMeta:{send-events-1e6b84b5-3bad-4338-b4ce-4fc91f157738 events-6254 /api/v1/namespaces/events-6254/pods/send-events-1e6b84b5-3bad-4338-b4ce-4fc91f157738 75a03211-2486-4b96-8584-3c8bf3ce68f9 15943647 0 2020-05-13 21:33:29 +0000 UTC map[name:foo time:364467161] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xgdlp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xgdlp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xgdlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:33:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:33:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.174,StartTime:2020-05-13 21:33:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 21:33:32 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a71f744935e3d3c5e16dd19e6d23b5692bc50d08a035da0378d20dc4ef13baf9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 13 21:33:35.426: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 13 21:33:37.440: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:37.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6254" for this suite. • [SLOW TEST:8.216 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":76,"skipped":1390,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:37.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0513 21:33:38.618429 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 13 21:33:38.618: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:33:38.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4798" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":77,"skipped":1401,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:33:38.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 13 21:33:38.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-91' May 13 21:33:39.080: INFO: stderr: "" May 13 21:33:39.080: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 21:33:39.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:33:39.235: INFO: stderr: "" May 13 21:33:39.235: INFO: stdout: "update-demo-nautilus-656dp update-demo-nautilus-m9qc4 " May 13 21:33:39.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-656dp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:33:39.342: INFO: stderr: "" May 13 21:33:39.342: INFO: stdout: "" May 13 21:33:39.342: INFO: update-demo-nautilus-656dp is created but not running May 13 21:33:44.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:33:44.455: INFO: stderr: "" May 13 21:33:44.455: INFO: stdout: "update-demo-nautilus-656dp update-demo-nautilus-m9qc4 " May 13 21:33:44.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-656dp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:33:44.539: INFO: stderr: "" May 13 21:33:44.539: INFO: stdout: "true" May 13 21:33:44.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-656dp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:33:44.630: INFO: stderr: "" May 13 21:33:44.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:33:44.630: INFO: validating pod update-demo-nautilus-656dp May 13 21:33:44.634: INFO: got data: { "image": "nautilus.jpg" } May 13 21:33:44.634: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:33:44.634: INFO: update-demo-nautilus-656dp is verified up and running May 13 21:33:44.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:33:44.736: INFO: stderr: "" May 13 21:33:44.736: INFO: stdout: "true" May 13 21:33:44.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:33:44.828: INFO: stderr: "" May 13 21:33:44.828: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:33:44.828: INFO: validating pod update-demo-nautilus-m9qc4 May 13 21:33:44.832: INFO: got data: { "image": "nautilus.jpg" } May 13 21:33:44.832: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:33:44.832: INFO: update-demo-nautilus-m9qc4 is verified up and running STEP: scaling down the replication controller May 13 21:33:44.834: INFO: scanned /root for discovery docs: May 13 21:33:44.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-91' May 13 21:33:45.942: INFO: stderr: "" May 13 21:33:45.942: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 13 21:33:45.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:33:46.035: INFO: stderr: "" May 13 21:33:46.035: INFO: stdout: "update-demo-nautilus-656dp update-demo-nautilus-m9qc4 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 13 21:33:51.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:33:51.147: INFO: stderr: "" May 13 21:33:51.147: INFO: stdout: "update-demo-nautilus-656dp update-demo-nautilus-m9qc4 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 13 21:33:56.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:33:56.235: INFO: stderr: "" May 13 21:33:56.235: INFO: stdout: "update-demo-nautilus-656dp update-demo-nautilus-m9qc4 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 13 21:34:01.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:34:01.336: INFO: stderr: "" May 13 21:34:01.336: INFO: stdout: "update-demo-nautilus-m9qc4 " May 13 21:34:01.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:01.452: INFO: stderr: "" May 13 21:34:01.452: INFO: stdout: "true" May 13 21:34:01.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:01.558: INFO: stderr: "" May 13 21:34:01.558: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:34:01.558: INFO: validating pod update-demo-nautilus-m9qc4 May 13 21:34:01.561: INFO: got data: { "image": "nautilus.jpg" } May 13 21:34:01.561: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:34:01.561: INFO: update-demo-nautilus-m9qc4 is verified up and running STEP: scaling up the replication controller May 13 21:34:01.563: INFO: scanned /root for discovery docs: May 13 21:34:01.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-91' May 13 21:34:02.686: INFO: stderr: "" May 13 21:34:02.686: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 13 21:34:02.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:34:02.790: INFO: stderr: "" May 13 21:34:02.790: INFO: stdout: "update-demo-nautilus-m9qc4 update-demo-nautilus-xkzwk " May 13 21:34:02.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:02.890: INFO: stderr: "" May 13 21:34:02.890: INFO: stdout: "true" May 13 21:34:02.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:02.981: INFO: stderr: "" May 13 21:34:02.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:34:02.981: INFO: validating pod update-demo-nautilus-m9qc4 May 13 21:34:02.984: INFO: got data: { "image": "nautilus.jpg" } May 13 21:34:02.984: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:34:02.984: INFO: update-demo-nautilus-m9qc4 is verified up and running May 13 21:34:02.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkzwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:03.150: INFO: stderr: "" May 13 21:34:03.150: INFO: stdout: "" May 13 21:34:03.150: INFO: update-demo-nautilus-xkzwk is created but not running May 13 21:34:08.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91' May 13 21:34:08.286: INFO: stderr: "" May 13 21:34:08.286: INFO: stdout: "update-demo-nautilus-m9qc4 update-demo-nautilus-xkzwk " May 13 21:34:08.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:08.484: INFO: stderr: "" May 13 21:34:08.484: INFO: stdout: "true" May 13 21:34:08.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9qc4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:08.756: INFO: stderr: "" May 13 21:34:08.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:34:08.756: INFO: validating pod update-demo-nautilus-m9qc4 May 13 21:34:08.791: INFO: got data: { "image": "nautilus.jpg" } May 13 21:34:08.791: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 13 21:34:08.791: INFO: update-demo-nautilus-m9qc4 is verified up and running May 13 21:34:08.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkzwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:08.895: INFO: stderr: "" May 13 21:34:08.895: INFO: stdout: "true" May 13 21:34:08.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkzwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91' May 13 21:34:08.973: INFO: stderr: "" May 13 21:34:08.973: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 13 21:34:08.973: INFO: validating pod update-demo-nautilus-xkzwk May 13 21:34:08.977: INFO: got data: { "image": "nautilus.jpg" } May 13 21:34:08.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 21:34:08.977: INFO: update-demo-nautilus-xkzwk is verified up and running STEP: using delete to clean up resources May 13 21:34:08.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-91' May 13 21:34:09.164: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:34:09.164: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 13 21:34:09.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-91' May 13 21:34:09.253: INFO: stderr: "No resources found in kubectl-91 namespace.\n" May 13 21:34:09.253: INFO: stdout: "" May 13 21:34:09.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-91 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 21:34:09.519: INFO: stderr: "" May 13 21:34:09.519: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:09.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-91" for this suite. 
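Stripped of the polling, the whole Update Demo scenario above is two kubectl scale operations against the replication controller plus a label-selector query to watch convergence. A compact hand-run version, assuming the rc has already been created in the current namespace (the rc name and label match the transcript; the jsonpath query stands in for the test's go-template):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
kubectl get pods -l name=update-demo -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

The repeated "Replicas for name=update-demo: expected=1 actual=2" lines in the log are exactly this query looping until the controller finishes deleting the surplus pod.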
• [SLOW TEST:30.918 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":78,"skipped":1408,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:09.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:34:10.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3" in namespace "projected-4862" to be "success or failure" May 13 21:34:10.299: INFO: Pod "downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 41.018111ms May 13 21:34:12.303: INFO: Pod "downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045201928s May 13 21:34:14.308: INFO: Pod "downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050122644s STEP: Saw pod success May 13 21:34:14.308: INFO: Pod "downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3" satisfied condition "success or failure" May 13 21:34:14.311: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3 container client-container: STEP: delete the pod May 13 21:34:14.409: INFO: Waiting for pod downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3 to disappear May 13 21:34:14.417: INFO: Pod downwardapi-volume-df8c6076-42d4-489b-a634-d31fd0f9a5f3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:14.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4862" for this suite. 
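The projected downwardAPI test above exposes the container's effective CPU limit as a file; because the pod sets no limit, the kubelet substitutes the node's allocatable CPU, which is what the test asserts. A minimal sketch of such a pod, with illustrative names (the real test uses its own client image and output checks):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so cpu_limit falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs projected-cpu-limit-demo   # prints the node-allocatable CPU; with the default divisor of 1 it is rounded up to whole cores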
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1415,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:14.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:18.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4098" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1427,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:18.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:34:18.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee" in namespace "downward-api-2062" to be "success or failure" May 13 21:34:18.718: INFO: Pod "downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee": Phase="Pending", Reason="", readiness=false. Elapsed: 5.246487ms May 13 21:34:20.721: INFO: Pod "downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008938119s May 13 21:34:22.725: INFO: Pod "downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee": Phase="Running", Reason="", readiness=true. Elapsed: 4.013077967s May 13 21:34:24.728: INFO: Pod "downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015746493s STEP: Saw pod success May 13 21:34:24.728: INFO: Pod "downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee" satisfied condition "success or failure" May 13 21:34:24.730: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee container client-container: STEP: delete the pod May 13 21:34:24.816: INFO: Waiting for pod downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee to disappear May 13 21:34:24.832: INFO: Pod downwardapi-volume-c2e0f1ee-60a9-4a00-be1b-c6f5de2910ee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:24.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2062" for this suite. • [SLOW TEST:6.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1446,"failed":0} [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:24.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:29.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6426" for this suite. 
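Both Watchers tests in this stretch rest on the same API contract: watch events for a resource are delivered in resourceVersion order, and a new watch started from a previously observed resourceVersion replays every change after that point. The contract is easy to poke at directly through the raw watch endpoint; a hedged sketch, assuming the default namespace and an illustrative ConfigMap name:

kubectl create configmap watch-demo --from-literal=k=v
RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
# mutate the object a few times from another shell, then replay from the recorded version:
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name=watch-demo"

The raw call streams ADDED/MODIFIED/DELETED events as JSON, matching the "Got : MODIFIED ..." lines in the restart-watching test that follows below.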
• [SLOW TEST:5.107 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":82,"skipped":1446,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:29.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:34:30.502: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:34:32.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002470, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002470, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002470, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002470, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:34:35.561: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the 
AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:47.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7657" for this suite. STEP: Destroying namespace "webhook-7657-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.858 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":83,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:47.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 13 21:34:47.864: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-watch-closed 824dc5a6-b887-4737-92ca-90e6a6256b8e 15944273 0 2020-05-13 21:34:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 13 21:34:47.864: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-watch-closed 824dc5a6-b887-4737-92ca-90e6a6256b8e 15944274 0 2020-05-13 21:34:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 13 21:34:47.874: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-watch-closed 824dc5a6-b887-4737-92ca-90e6a6256b8e 15944275 0 2020-05-13 21:34:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 13 21:34:47.875: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-watch-closed 824dc5a6-b887-4737-92ca-90e6a6256b8e 15944276 0 2020-05-13 21:34:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:47.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7413" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":84,"skipped":1504,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:47.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:55.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3033" for this suite. • [SLOW TEST:7.089 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":85,"skipped":1504,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:34:55.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-705ebf20-2f73-403f-a504-d7341e6c6634 STEP: Creating a pod to test consume configMaps May 13 21:34:55.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51" in namespace "configmap-5517" to be "success or failure" May 13 21:34:55.356: INFO: Pod "pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51": Phase="Pending", Reason="", readiness=false. Elapsed: 37.516141ms May 13 21:34:57.366: INFO: Pod "pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047484579s May 13 21:34:59.371: INFO: Pod "pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051688311s STEP: Saw pod success May 13 21:34:59.371: INFO: Pod "pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51" satisfied condition "success or failure" May 13 21:34:59.373: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51 container configmap-volume-test: STEP: delete the pod May 13 21:34:59.918: INFO: Waiting for pod pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51 to disappear May 13 21:34:59.942: INFO: Pod pod-configmaps-032577ea-98df-492a-83f3-30ad2cc9eb51 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:34:59.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5517" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1510,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:35:00.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4644 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 13 21:35:00.349: INFO: Found 0 stateful pods, waiting for 3 May 13 21:35:10.354: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 21:35:10.354: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 21:35:10.354: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 13 21:35:20.355: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 21:35:20.355: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 21:35:20.355: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 13 21:35:20.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4644 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 21:35:23.753: INFO: stderr: "I0513 21:35:23.605552 2625 log.go:172] (0xc000117810) (0xc0006d2000) Create stream\nI0513 21:35:23.605582 2625 log.go:172] (0xc000117810) (0xc0006d2000) Stream added, broadcasting: 1\nI0513 21:35:23.607722 2625 log.go:172] (0xc000117810) Reply frame received for 1\nI0513 21:35:23.607747 2625 log.go:172] (0xc000117810) (0xc00067bcc0) Create stream\nI0513 21:35:23.607753 2625 log.go:172] (0xc000117810) (0xc00067bcc0) Stream added, broadcasting: 3\nI0513 21:35:23.608457 2625 log.go:172] (0xc000117810) Reply frame received for 3\nI0513 21:35:23.608480 2625 log.go:172] (0xc000117810) (0xc0006d20a0) Create stream\nI0513 21:35:23.608489 2625 log.go:172] (0xc000117810) (0xc0006d20a0) Stream added, broadcasting: 5\nI0513 21:35:23.609414 2625 log.go:172] (0xc000117810) Reply frame received for 5\nI0513 21:35:23.707665 2625 log.go:172] (0xc000117810) Data frame received for 5\nI0513 21:35:23.707686 2625 log.go:172] (0xc0006d20a0) (5) Data frame handling\nI0513 21:35:23.707705 2625 log.go:172] (0xc0006d20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:35:23.744835 2625 log.go:172] (0xc000117810) Data frame 
received for 3\nI0513 21:35:23.744872 2625 log.go:172] (0xc00067bcc0) (3) Data frame handling\nI0513 21:35:23.744892 2625 log.go:172] (0xc00067bcc0) (3) Data frame sent\nI0513 21:35:23.745079 2625 log.go:172] (0xc000117810) Data frame received for 5\nI0513 21:35:23.745102 2625 log.go:172] (0xc0006d20a0) (5) Data frame handling\nI0513 21:35:23.745294 2625 log.go:172] (0xc000117810) Data frame received for 3\nI0513 21:35:23.745314 2625 log.go:172] (0xc00067bcc0) (3) Data frame handling\nI0513 21:35:23.746959 2625 log.go:172] (0xc000117810) Data frame received for 1\nI0513 21:35:23.746999 2625 log.go:172] (0xc0006d2000) (1) Data frame handling\nI0513 21:35:23.747040 2625 log.go:172] (0xc0006d2000) (1) Data frame sent\nI0513 21:35:23.747065 2625 log.go:172] (0xc000117810) (0xc0006d2000) Stream removed, broadcasting: 1\nI0513 21:35:23.747083 2625 log.go:172] (0xc000117810) Go away received\nI0513 21:35:23.747320 2625 log.go:172] (0xc000117810) (0xc0006d2000) Stream removed, broadcasting: 1\nI0513 21:35:23.747337 2625 log.go:172] (0xc000117810) (0xc00067bcc0) Stream removed, broadcasting: 3\nI0513 21:35:23.747342 2625 log.go:172] (0xc000117810) (0xc0006d20a0) Stream removed, broadcasting: 5\n" May 13 21:35:23.753: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 21:35:23.753: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 13 21:35:33.782: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 13 21:35:43.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4644 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:35:44.067: INFO: stderr: "I0513 21:35:43.973869 2653 log.go:172] (0xc00099cbb0) (0xc0007dc000) Create stream\nI0513 21:35:43.973927 2653 log.go:172] (0xc00099cbb0) (0xc0007dc000) Stream added, broadcasting: 1\nI0513 21:35:43.975541 2653 log.go:172] (0xc00099cbb0) Reply frame received for 1\nI0513 21:35:43.975591 2653 log.go:172] (0xc00099cbb0) (0xc0008e2000) Create stream\nI0513 21:35:43.975601 2653 log.go:172] (0xc00099cbb0) (0xc0008e2000) Stream added, broadcasting: 3\nI0513 21:35:43.976427 2653 log.go:172] (0xc00099cbb0) Reply frame received for 3\nI0513 21:35:43.976458 2653 log.go:172] (0xc00099cbb0) (0xc0008e20a0) Create stream\nI0513 21:35:43.976467 2653 log.go:172] (0xc00099cbb0) (0xc0008e20a0) Stream added, broadcasting: 5\nI0513 21:35:43.977463 2653 log.go:172] (0xc00099cbb0) Reply frame received for 5\nI0513 21:35:44.059686 2653 log.go:172] (0xc00099cbb0) Data frame received for 5\nI0513 21:35:44.059727 2653 log.go:172] (0xc0008e20a0) (5) Data frame handling\nI0513 21:35:44.059739 2653 log.go:172] (0xc0008e20a0) (5) Data frame sent\nI0513 21:35:44.059747 2653 log.go:172] (0xc00099cbb0) Data frame received for 5\nI0513 21:35:44.059754 2653 log.go:172] (0xc0008e20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0513 21:35:44.059776 2653 log.go:172] (0xc00099cbb0) Data frame received for 3\nI0513 21:35:44.059784 2653 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0513 21:35:44.059792 2653 log.go:172] (0xc0008e2000) (3) Data frame sent\nI0513 21:35:44.059957 2653 log.go:172] (0xc00099cbb0) Data frame received for 3\nI0513 
21:35:44.060001 2653 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0513 21:35:44.061500 2653 log.go:172] (0xc00099cbb0) Data frame received for 1\nI0513 21:35:44.061511 2653 log.go:172] (0xc0007dc000) (1) Data frame handling\nI0513 21:35:44.061517 2653 log.go:172] (0xc0007dc000) (1) Data frame sent\nI0513 21:35:44.061524 2653 log.go:172] (0xc00099cbb0) (0xc0007dc000) Stream removed, broadcasting: 1\nI0513 21:35:44.061734 2653 log.go:172] (0xc00099cbb0) Go away received\nI0513 21:35:44.061784 2653 log.go:172] (0xc00099cbb0) (0xc0007dc000) Stream removed, broadcasting: 1\nI0513 21:35:44.061814 2653 log.go:172] (0xc00099cbb0) (0xc0008e2000) Stream removed, broadcasting: 3\nI0513 21:35:44.061838 2653 log.go:172] (0xc00099cbb0) (0xc0008e20a0) Stream removed, broadcasting: 5\n" May 13 21:35:44.067: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 21:35:44.067: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 21:36:04.088: INFO: Waiting for StatefulSet statefulset-4644/ss2 to complete update May 13 21:36:04.088: INFO: Waiting for Pod statefulset-4644/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 13 21:36:14.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4644 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 21:36:14.347: INFO: stderr: "I0513 21:36:14.221088 2673 log.go:172] (0xc000a7a000) (0xc000ae60a0) Create stream\nI0513 21:36:14.221351 2673 log.go:172] (0xc000a7a000) (0xc000ae60a0) Stream added, broadcasting: 1\nI0513 21:36:14.223193 2673 log.go:172] (0xc000a7a000) Reply frame received for 1\nI0513 21:36:14.223248 2673 log.go:172] (0xc000a7a000) (0xc0009cc000) Create stream\nI0513 21:36:14.223276 2673 log.go:172] (0xc000a7a000) (0xc0009cc000) Stream added, broadcasting: 3\nI0513 21:36:14.224399 2673 log.go:172] (0xc000a7a000) Reply frame received for 3\nI0513 21:36:14.224435 2673 log.go:172] (0xc000a7a000) (0xc000b043c0) Create stream\nI0513 21:36:14.224475 2673 log.go:172] (0xc000a7a000) (0xc000b043c0) Stream added, broadcasting: 5\nI0513 21:36:14.225454 2673 log.go:172] (0xc000a7a000) Reply frame received for 5\nI0513 21:36:14.308751 2673 log.go:172] (0xc000a7a000) Data frame received for 5\nI0513 21:36:14.308777 2673 log.go:172] (0xc000b043c0) (5) Data frame handling\nI0513 21:36:14.308800 2673 log.go:172] (0xc000b043c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:36:14.338514 2673 log.go:172] (0xc000a7a000) Data frame received for 3\nI0513 21:36:14.338548 2673 log.go:172] (0xc0009cc000) (3) Data frame handling\nI0513 21:36:14.338568 2673 log.go:172] (0xc0009cc000) (3) Data frame sent\nI0513 21:36:14.338911 2673 log.go:172] (0xc000a7a000) Data frame received for 3\nI0513 21:36:14.338963 2673 log.go:172] (0xc0009cc000) (3) Data frame handling\nI0513 21:36:14.339082 2673 log.go:172] (0xc000a7a000) Data frame received for 5\nI0513 21:36:14.339095 2673 log.go:172] (0xc000b043c0) (5) Data frame handling\nI0513 21:36:14.341012 2673 log.go:172] (0xc000a7a000) Data frame received for 1\nI0513 21:36:14.341057 2673 log.go:172] (0xc000ae60a0) (1) Data frame handling\nI0513 21:36:14.341095 2673 log.go:172] (0xc000ae60a0) (1) Data frame sent\nI0513 21:36:14.341338 2673 log.go:172] (0xc000a7a000) (0xc000ae60a0) Stream removed, broadcasting: 1\nI0513 
21:36:14.341376 2673 log.go:172] (0xc000a7a000) Go away received\nI0513 21:36:14.341749 2673 log.go:172] (0xc000a7a000) (0xc000ae60a0) Stream removed, broadcasting: 1\nI0513 21:36:14.341787 2673 log.go:172] (0xc000a7a000) (0xc0009cc000) Stream removed, broadcasting: 3\nI0513 21:36:14.341803 2673 log.go:172] (0xc000a7a000) (0xc000b043c0) Stream removed, broadcasting: 5\n" May 13 21:36:14.348: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 21:36:14.348: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 21:36:24.375: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 13 21:36:34.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4644 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:36:34.743: INFO: stderr: "I0513 21:36:34.634127 2692 log.go:172] (0xc0009f0000) (0xc0005e2640) Create stream\nI0513 21:36:34.634178 2692 log.go:172] (0xc0009f0000) (0xc0005e2640) Stream added, broadcasting: 1\nI0513 21:36:34.636724 2692 log.go:172] (0xc0009f0000) Reply frame received for 1\nI0513 21:36:34.636758 2692 log.go:172] (0xc0009f0000) (0xc00078b400) Create stream\nI0513 21:36:34.636777 2692 log.go:172] (0xc0009f0000) (0xc00078b400) Stream added, broadcasting: 3\nI0513 21:36:34.637973 2692 log.go:172] (0xc0009f0000) Reply frame received for 3\nI0513 21:36:34.638022 2692 log.go:172] (0xc0009f0000) (0xc0009c2000) Create stream\nI0513 21:36:34.638039 2692 log.go:172] (0xc0009f0000) (0xc0009c2000) Stream added, broadcasting: 5\nI0513 21:36:34.638865 2692 log.go:172] (0xc0009f0000) Reply frame received for 5\nI0513 21:36:34.739144 2692 log.go:172] (0xc0009f0000) Data frame received for 3\nI0513 21:36:34.739174 2692 log.go:172] (0xc00078b400) (3) Data frame handling\nI0513 21:36:34.739185 2692 log.go:172] (0xc00078b400) (3) Data frame sent\nI0513 21:36:34.739192 2692 log.go:172] (0xc0009f0000) Data frame received for 3\nI0513 21:36:34.739196 2692 log.go:172] (0xc00078b400) (3) Data frame handling\nI0513 21:36:34.739216 2692 log.go:172] (0xc0009f0000) Data frame received for 5\nI0513 21:36:34.739222 2692 log.go:172] (0xc0009c2000) (5) Data frame handling\nI0513 21:36:34.739227 2692 log.go:172] (0xc0009c2000) (5) Data frame sent\nI0513 21:36:34.739239 2692 log.go:172] (0xc0009f0000) Data frame received for 5\nI0513 21:36:34.739243 2692 log.go:172] (0xc0009c2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0513 21:36:34.739891 2692 log.go:172] (0xc0009f0000) Data frame received for 1\nI0513 21:36:34.739905 2692 log.go:172] (0xc0005e2640) (1) Data frame handling\nI0513 21:36:34.739913 2692 log.go:172] (0xc0005e2640) (1) Data frame sent\nI0513 21:36:34.739923 2692 log.go:172] (0xc0009f0000) (0xc0005e2640) Stream removed, broadcasting: 1\nI0513 21:36:34.739932 2692 log.go:172] (0xc0009f0000) Go away received\nI0513 21:36:34.740145 2692 log.go:172] (0xc0009f0000) (0xc0005e2640) Stream removed, broadcasting: 1\nI0513 21:36:34.740156 2692 log.go:172] (0xc0009f0000) (0xc00078b400) Stream removed, broadcasting: 3\nI0513 21:36:34.740161 2692 log.go:172] (0xc0009f0000) (0xc0009c2000) Stream removed, broadcasting: 5\n" May 13 21:36:34.743: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 21:36:34.743: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 21:36:54.820: INFO: Waiting for StatefulSet statefulset-4644/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 13 21:37:04.849: INFO: Deleting all statefulset in ns statefulset-4644 May 13 21:37:04.851: INFO: Scaling statefulset ss2 to 0 May 13 21:37:24.875: INFO: Waiting for statefulset status.replicas updated to 0 May 13 21:37:24.886: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:37:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4644" for this suite. • [SLOW TEST:144.905 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":87,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:37:24.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e60b44ed-80ea-4377-97a8-1fb34491b236 STEP: Creating a pod to test consume secrets May 13 21:37:25.035: INFO: Waiting up to 5m0s for pod "pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1" in namespace "secrets-1584" to be "success or failure" May 13 21:37:25.039: INFO: Pod "pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377875ms May 13 21:37:27.176: INFO: Pod "pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141167422s May 13 21:37:29.179: INFO: Pod "pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1": Phase="Running", Reason="", readiness=true. Elapsed: 4.144943081s May 13 21:37:31.183: INFO: Pod "pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.148842349s STEP: Saw pod success May 13 21:37:31.183: INFO: Pod "pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1" satisfied condition "success or failure" May 13 21:37:31.187: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1 container secret-volume-test: STEP: delete the pod May 13 21:37:31.251: INFO: Waiting for pod pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1 to disappear May 13 21:37:31.279: INFO: Pod pod-secrets-9314828c-9d64-4b47-8851-1d8c77dc7cf1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:37:31.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1584" for this suite. • [SLOW TEST:6.386 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1537,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:37:31.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 13 21:37:31.389: INFO: Waiting up to 5m0s for pod "pod-77e1a920-eb67-4c89-a393-06f37fadc65f" in namespace "emptydir-6224" to be "success or failure" May 13 21:37:31.475: INFO: Pod "pod-77e1a920-eb67-4c89-a393-06f37fadc65f": Phase="Pending", Reason="", readiness=false. Elapsed: 85.321574ms May 13 21:37:33.552: INFO: Pod "pod-77e1a920-eb67-4c89-a393-06f37fadc65f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163053065s May 13 21:37:35.556: INFO: Pod "pod-77e1a920-eb67-4c89-a393-06f37fadc65f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166641262s STEP: Saw pod success May 13 21:37:35.556: INFO: Pod "pod-77e1a920-eb67-4c89-a393-06f37fadc65f" satisfied condition "success or failure" May 13 21:37:35.559: INFO: Trying to get logs from node jerma-worker pod pod-77e1a920-eb67-4c89-a393-06f37fadc65f container test-container: STEP: delete the pod May 13 21:37:35.811: INFO: Waiting for pod pod-77e1a920-eb67-4c89-a393-06f37fadc65f to disappear May 13 21:37:36.067: INFO: Pod pod-77e1a920-eb67-4c89-a393-06f37fadc65f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:37:36.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6224" for this suite. 
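------------------------------
The emptydir cases above all follow the same recipe: create a pod that mounts an emptyDir volume with the medium and permission bits under test, let its container inspect the mount and exit, then treat phase Succeeded as the pass signal ("Saw pod success"). A minimal sketch of that pod shape in Go; the names, the busybox image, and the stat command are illustrative stand-ins, not what the suite actually generates:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: v1.PodSpec{
			// RestartPolicyNever lets the pod reach Succeeded once the
			// container's check command exits 0.
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c '%a' /mnt/volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "vol",
				VolumeSource: v1.VolumeSource{
					// StorageMediumDefault = node disk; StorageMediumMemory = tmpfs.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------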
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1537,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:37:36.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1053 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 21:37:36.314: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 13 21:38:02.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.187:8080/dial?request=hostname&protocol=udp&host=10.244.1.74&port=8081&tries=1'] Namespace:pod-network-test-1053 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:38:02.465: INFO: >>> kubeConfig: /root/.kube/config I0513 21:38:02.496258 6 log.go:172] (0xc002534370) (0xc0023220a0) Create stream I0513 21:38:02.496280 6 log.go:172] (0xc002534370) (0xc0023220a0) Stream added, broadcasting: 1 I0513 21:38:02.497981 6 log.go:172] (0xc002534370) Reply frame received for 1 I0513 21:38:02.498023 6 log.go:172] (0xc002534370) (0xc002322140) Create stream I0513 21:38:02.498039 6 log.go:172] (0xc002534370) (0xc002322140) Stream added, broadcasting: 3 I0513 21:38:02.498882 6 log.go:172] (0xc002534370) Reply frame received for 3 I0513 21:38:02.498936 6 log.go:172] (0xc002534370) (0xc002322280) Create stream I0513 21:38:02.498958 6 log.go:172] (0xc002534370) (0xc002322280) Stream added, broadcasting: 5 I0513 21:38:02.499689 6 log.go:172] (0xc002534370) Reply frame received for 5 I0513 21:38:02.565568 6 log.go:172] (0xc002534370) Data frame received for 3 I0513 21:38:02.565604 6 log.go:172] (0xc002322140) (3) Data frame handling I0513 21:38:02.565637 6 log.go:172] (0xc002322140) (3) Data frame sent I0513 21:38:02.566411 6 log.go:172] (0xc002534370) Data frame received for 3 I0513 21:38:02.566430 6 log.go:172] (0xc002322140) (3) Data frame handling I0513 21:38:02.566697 6 log.go:172] (0xc002534370) Data frame received for 5 I0513 21:38:02.566722 6 log.go:172] (0xc002322280) (5) Data frame handling I0513 21:38:02.568020 6 log.go:172] (0xc002534370) Data frame received for 1 I0513 21:38:02.568088 6 log.go:172] (0xc0023220a0) (1) Data frame handling I0513 21:38:02.568119 6 log.go:172] (0xc0023220a0) (1) Data frame sent I0513 21:38:02.568139 6 log.go:172] (0xc002534370) (0xc0023220a0) Stream removed, broadcasting: 1 I0513 21:38:02.568254 6 log.go:172] (0xc002534370) Go away received I0513 21:38:02.568426 6 log.go:172] (0xc002534370) (0xc0023220a0) Stream removed, broadcasting: 1 I0513 21:38:02.568457 6 log.go:172] (0xc002534370) (0xc002322140) Stream removed, 
broadcasting: 3 I0513 21:38:02.568467 6 log.go:172] (0xc002534370) (0xc002322280) Stream removed, broadcasting: 5 May 13 21:38:02.568: INFO: Waiting for responses: map[] May 13 21:38:02.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.187:8080/dial?request=hostname&protocol=udp&host=10.244.2.186&port=8081&tries=1'] Namespace:pod-network-test-1053 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:38:02.571: INFO: >>> kubeConfig: /root/.kube/config I0513 21:38:02.600203 6 log.go:172] (0xc001f82370) (0xc0026932c0) Create stream I0513 21:38:02.600220 6 log.go:172] (0xc001f82370) (0xc0026932c0) Stream added, broadcasting: 1 I0513 21:38:02.614056 6 log.go:172] (0xc001f82370) Reply frame received for 1 I0513 21:38:02.614128 6 log.go:172] (0xc001f82370) (0xc002973ea0) Create stream I0513 21:38:02.614154 6 log.go:172] (0xc001f82370) (0xc002973ea0) Stream added, broadcasting: 3 I0513 21:38:02.615211 6 log.go:172] (0xc001f82370) Reply frame received for 3 I0513 21:38:02.615237 6 log.go:172] (0xc001f82370) (0xc0023228c0) Create stream I0513 21:38:02.615248 6 log.go:172] (0xc001f82370) (0xc0023228c0) Stream added, broadcasting: 5 I0513 21:38:02.616015 6 log.go:172] (0xc001f82370) Reply frame received for 5 I0513 21:38:02.681747 6 log.go:172] (0xc001f82370) Data frame received for 3 I0513 21:38:02.681778 6 log.go:172] (0xc002973ea0) (3) Data frame handling I0513 21:38:02.681793 6 log.go:172] (0xc002973ea0) (3) Data frame sent I0513 21:38:02.682064 6 log.go:172] (0xc001f82370) Data frame received for 5 I0513 21:38:02.682093 6 log.go:172] (0xc0023228c0) (5) Data frame handling I0513 21:38:02.682163 6 log.go:172] (0xc001f82370) Data frame received for 3 I0513 21:38:02.682174 6 log.go:172] (0xc002973ea0) (3) Data frame handling I0513 21:38:02.684203 6 log.go:172] (0xc001f82370) Data frame received for 1 I0513 21:38:02.684220 6 log.go:172] (0xc0026932c0) (1) Data frame handling I0513 21:38:02.684234 6 log.go:172] (0xc0026932c0) (1) Data frame sent I0513 21:38:02.684258 6 log.go:172] (0xc001f82370) (0xc0026932c0) Stream removed, broadcasting: 1 I0513 21:38:02.684289 6 log.go:172] (0xc001f82370) Go away received I0513 21:38:02.684405 6 log.go:172] (0xc001f82370) (0xc0026932c0) Stream removed, broadcasting: 1 I0513 21:38:02.684432 6 log.go:172] (0xc001f82370) (0xc002973ea0) Stream removed, broadcasting: 3 I0513 21:38:02.684447 6 log.go:172] (0xc001f82370) (0xc0023228c0) Stream removed, broadcasting: 5 May 13 21:38:02.684: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:02.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1053" for this suite. 
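------------------------------
The "Waiting for responses: map[]" lines above come from a dial probe: the host test container asks the agnhost webserver pod to reach each peer pod over UDP and relay back the hostname that answered, and the test passes once every expected hostname has been seen (an empty map means nothing is still outstanding). Roughly the same request in plain Go, using the dial URL from this run; the loose JSON decoding is an assumption, since the exact response schema (a "responses" list of hostnames on success) varies by agnhost version:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// IPs and ports are the ones from this run; substitute your own pods'.
	url := "http://10.244.2.187:8080/dial?request=hostname&protocol=udp&host=10.244.1.74&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)

	// Decode loosely rather than pinning a struct to one agnhost version.
	var out map[string]interface{}
	if err := json.Unmarshal(body, &out); err != nil {
		panic(err)
	}
	fmt.Printf("dial result: %v\n", out)
}
------------------------------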
• [SLOW TEST:26.614 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1545,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:02.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 13 21:38:02.839: INFO: Waiting up to 5m0s for pod "pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1" in namespace "emptydir-6421" to be "success or failure" May 13 21:38:02.843: INFO: Pod "pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.701125ms May 13 21:38:04.847: INFO: Pod "pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007595081s May 13 21:38:06.865: INFO: Pod "pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025470678s STEP: Saw pod success May 13 21:38:06.865: INFO: Pod "pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1" satisfied condition "success or failure" May 13 21:38:06.867: INFO: Trying to get logs from node jerma-worker pod pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1 container test-container: STEP: delete the pod May 13 21:38:06.917: INFO: Waiting for pod pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1 to disappear May 13 21:38:07.068: INFO: Pod pod-e566cc40-5a58-46bc-b2e0-96d55aa6b1d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:07.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6421" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1560,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:07.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7607 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7607 I0513 21:38:07.350111 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7607, replica count: 2 I0513 21:38:10.401046 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:38:13.401341 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 21:38:13.401: INFO: Creating new exec pod May 13 21:38:18.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7607 execpodpv8b9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 13 21:38:18.653: INFO: stderr: "I0513 21:38:18.563004 2715 log.go:172] (0xc000404790) (0xc0008c8140) Create stream\nI0513 21:38:18.563146 2715 log.go:172] (0xc000404790) (0xc0008c8140) Stream added, broadcasting: 1\nI0513 21:38:18.565542 2715 log.go:172] (0xc000404790) Reply frame received for 1\nI0513 21:38:18.565576 2715 log.go:172] (0xc000404790) (0xc000627a40) Create stream\nI0513 21:38:18.565601 2715 log.go:172] (0xc000404790) (0xc000627a40) Stream added, broadcasting: 3\nI0513 21:38:18.566349 2715 log.go:172] (0xc000404790) Reply frame received for 3\nI0513 21:38:18.566381 2715 log.go:172] (0xc000404790) (0xc0002a5400) Create stream\nI0513 21:38:18.566395 2715 log.go:172] (0xc000404790) (0xc0002a5400) Stream added, broadcasting: 5\nI0513 21:38:18.567140 2715 log.go:172] (0xc000404790) Reply frame received for 5\nI0513 21:38:18.646791 2715 log.go:172] (0xc000404790) Data frame received for 5\nI0513 21:38:18.646813 2715 log.go:172] (0xc0002a5400) (5) Data frame handling\nI0513 21:38:18.646825 2715 log.go:172] (0xc0002a5400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0513 21:38:18.647309 2715 log.go:172] (0xc000404790) Data frame received for 5\nI0513 21:38:18.647331 2715 log.go:172] (0xc0002a5400) (5) Data frame handling\nI0513 21:38:18.647361 2715 log.go:172] (0xc0002a5400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0513 21:38:18.647639 2715 log.go:172] 
(0xc000404790) Data frame received for 5\nI0513 21:38:18.647658 2715 log.go:172] (0xc0002a5400) (5) Data frame handling\nI0513 21:38:18.647841 2715 log.go:172] (0xc000404790) Data frame received for 3\nI0513 21:38:18.647863 2715 log.go:172] (0xc000627a40) (3) Data frame handling\nI0513 21:38:18.649068 2715 log.go:172] (0xc000404790) Data frame received for 1\nI0513 21:38:18.649094 2715 log.go:172] (0xc0008c8140) (1) Data frame handling\nI0513 21:38:18.649237 2715 log.go:172] (0xc0008c8140) (1) Data frame sent\nI0513 21:38:18.649315 2715 log.go:172] (0xc000404790) (0xc0008c8140) Stream removed, broadcasting: 1\nI0513 21:38:18.649349 2715 log.go:172] (0xc000404790) Go away received\nI0513 21:38:18.649923 2715 log.go:172] (0xc000404790) (0xc0008c8140) Stream removed, broadcasting: 1\nI0513 21:38:18.649947 2715 log.go:172] (0xc000404790) (0xc000627a40) Stream removed, broadcasting: 3\nI0513 21:38:18.649963 2715 log.go:172] (0xc000404790) (0xc0002a5400) Stream removed, broadcasting: 5\n" May 13 21:38:18.653: INFO: stdout: "" May 13 21:38:18.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7607 execpodpv8b9 -- /bin/sh -x -c nc -zv -t -w 2 10.105.154.150 80' May 13 21:38:18.847: INFO: stderr: "I0513 21:38:18.771475 2737 log.go:172] (0xc000924b00) (0xc0005d2320) Create stream\nI0513 21:38:18.771538 2737 log.go:172] (0xc000924b00) (0xc0005d2320) Stream added, broadcasting: 1\nI0513 21:38:18.773450 2737 log.go:172] (0xc000924b00) Reply frame received for 1\nI0513 21:38:18.773483 2737 log.go:172] (0xc000924b00) (0xc0005d23c0) Create stream\nI0513 21:38:18.773497 2737 log.go:172] (0xc000924b00) (0xc0005d23c0) Stream added, broadcasting: 3\nI0513 21:38:18.774344 2737 log.go:172] (0xc000924b00) Reply frame received for 3\nI0513 21:38:18.774378 2737 log.go:172] (0xc000924b00) (0xc0008d6000) Create stream\nI0513 21:38:18.774388 2737 log.go:172] (0xc000924b00) (0xc0008d6000) Stream added, broadcasting: 5\nI0513 21:38:18.775099 2737 log.go:172] (0xc000924b00) Reply frame received for 5\nI0513 21:38:18.843335 2737 log.go:172] (0xc000924b00) Data frame received for 5\nI0513 21:38:18.843354 2737 log.go:172] (0xc0008d6000) (5) Data frame handling\nI0513 21:38:18.843361 2737 log.go:172] (0xc0008d6000) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.154.150 80\nConnection to 10.105.154.150 80 port [tcp/http] succeeded!\nI0513 21:38:18.843375 2737 log.go:172] (0xc000924b00) Data frame received for 3\nI0513 21:38:18.843396 2737 log.go:172] (0xc0005d23c0) (3) Data frame handling\nI0513 21:38:18.843420 2737 log.go:172] (0xc000924b00) Data frame received for 5\nI0513 21:38:18.843430 2737 log.go:172] (0xc0008d6000) (5) Data frame handling\nI0513 21:38:18.844221 2737 log.go:172] (0xc000924b00) Data frame received for 1\nI0513 21:38:18.844231 2737 log.go:172] (0xc0005d2320) (1) Data frame handling\nI0513 21:38:18.844239 2737 log.go:172] (0xc0005d2320) (1) Data frame sent\nI0513 21:38:18.844246 2737 log.go:172] (0xc000924b00) (0xc0005d2320) Stream removed, broadcasting: 1\nI0513 21:38:18.844278 2737 log.go:172] (0xc000924b00) Go away received\nI0513 21:38:18.844541 2737 log.go:172] (0xc000924b00) (0xc0005d2320) Stream removed, broadcasting: 1\nI0513 21:38:18.844554 2737 log.go:172] (0xc000924b00) (0xc0005d23c0) Stream removed, broadcasting: 3\nI0513 21:38:18.844562 2737 log.go:172] (0xc000924b00) (0xc0008d6000) Stream removed, broadcasting: 5\n" May 13 21:38:18.847: INFO: stdout: "" May 13 21:38:18.847: INFO: Cleaning up the ExternalName to ClusterIP test service 
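------------------------------
What this Services test exercises, stripped down: the service is created as type=ExternalName, then rewritten to type=ClusterIP with a selector and port so it picks up the replication controller's pods, and `nc -zv -t -w 2` verifies that both the service name and the allocated cluster IP accept TCP connections on port 80. A sketch of the two service shapes; the example.com target and the selector key are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Starting shape: a pure CNAME-style service with no endpoints.
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: v1.ServiceSpec{
			Type:         v1.ServiceTypeExternalName,
			ExternalName: "example.com", // placeholder target
		},
	}

	// The conversion: drop ExternalName, add a selector and a port so the
	// service gets a ClusterIP and endpoints from the matching pods.
	svc.Spec = v1.ServiceSpec{
		Type:     v1.ServiceTypeClusterIP,
		Selector: map[string]string{"name": "externalname-service"},
		Ports:    []v1.ServicePort{{Port: 80, Protocol: v1.ProtocolTCP}},
	}

	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))
}
------------------------------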
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:18.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7607" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.950 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":92,"skipped":1574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:19.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:24.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1246" for this suite. 
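------------------------------
The adoption check just performed is driven by ownership semantics rather than any explicit API call: a bare pod carrying a 'name' label exists first, and when a replication controller with a matching selector appears, the controller manager adopts the orphan (setting the RC as its ownerReference) instead of creating a replacement pod. A sketch of the two objects involved; the image and replica count are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// The orphan: created before the controller, with a matching label.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "app", Image: "httpd:2.4.38-alpine"}},
		},
	}

	// The adopter: replicas=1 is already satisfied by the existing pod,
	// so the RC takes ownership rather than spawning a second one.
	one := int32(1)
	rc := &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}

	for _, obj := range []interface{}{pod, rc} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
------------------------------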
• [SLOW TEST:5.344 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":93,"skipped":1600,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:24.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-d86ee38a-617c-468c-8963-38af02666733 STEP: Creating a pod to test consume configMaps May 13 21:38:24.903: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7" in namespace "configmap-6386" to be "success or failure" May 13 21:38:24.926: INFO: Pod "pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.880954ms May 13 21:38:27.003: INFO: Pod "pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099767656s May 13 21:38:29.006: INFO: Pod "pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103348356s May 13 21:38:31.010: INFO: Pod "pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107069548s STEP: Saw pod success May 13 21:38:31.010: INFO: Pod "pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7" satisfied condition "success or failure" May 13 21:38:31.013: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7 container configmap-volume-test: STEP: delete the pod May 13 21:38:31.058: INFO: Waiting for pod pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7 to disappear May 13 21:38:31.065: INFO: Pod pod-configmaps-eb4dd8d0-190a-462e-bf89-1467866298c7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:31.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6386" for this suite. 
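------------------------------
The defaultMode knob this ConfigMap test turns maps to a single pointer field on the volume source: every key projected into the volume inherits those permission bits unless an individual item overrides them. A sketch, with the 0400 mode and names as illustrative choices:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // applied to every file projected from the configMap
	vol := v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume"},
				DefaultMode:          &mode,
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
------------------------------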
• [SLOW TEST:6.704 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1606,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:31.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:38:31.477: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:38:33.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002711, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002711, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002711, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002711, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:38:36.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:38:36.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the 
updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:37.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2354" for this suite. STEP: Destroying namespace "webhook-2354-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.880 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":95,"skipped":1606,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:37.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:38:44.513: INFO: DNS probes using dns-9815/dns-test-b91668f7-c1e6-440a-96e0-c0c735c5c662 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:38:44.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9815" for this suite. • [SLOW TEST:6.653 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":96,"skipped":1626,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:38:44.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:38:45.243: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 13 21:38:45.255: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:45.308: INFO: Number of nodes with available pods: 0 May 13 21:38:45.308: INFO: Node jerma-worker is running more than one daemon pod May 13 21:38:46.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:46.316: INFO: Number of nodes with available pods: 0 May 13 21:38:46.316: INFO: Node jerma-worker is running more than one daemon pod May 13 21:38:47.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:47.646: INFO: Number of nodes with available pods: 0 May 13 21:38:47.646: INFO: Node jerma-worker is running more than one daemon pod May 13 21:38:48.312: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:48.316: INFO: Number of nodes with available pods: 0 May 13 21:38:48.316: INFO: Node jerma-worker is running more than one daemon pod May 13 21:38:49.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:49.317: INFO: Number of nodes with available pods: 0 May 13 21:38:49.317: INFO: Node jerma-worker is running more than one daemon pod May 13 21:38:50.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:50.322: INFO: Number of nodes with available pods: 1 May 13 21:38:50.322: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:38:51.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:51.316: INFO: Number of nodes with available pods: 2 May 13 21:38:51.316: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 13 21:38:51.351: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:51.351: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:51.377: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:52.381: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:52.381: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 13 21:38:52.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:53.382: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:53.382: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:53.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:54.382: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:54.382: INFO: Pod daemon-set-h5qs6 is not available May 13 21:38:54.382: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:54.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:55.380: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:55.380: INFO: Pod daemon-set-h5qs6 is not available May 13 21:38:55.380: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:55.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:56.382: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:56.382: INFO: Pod daemon-set-h5qs6 is not available May 13 21:38:56.382: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:56.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:57.381: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:57.381: INFO: Pod daemon-set-h5qs6 is not available May 13 21:38:57.381: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:57.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:58.380: INFO: Wrong image for pod: daemon-set-h5qs6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:58.380: INFO: Pod daemon-set-h5qs6 is not available May 13 21:38:58.380: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 13 21:38:58.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:38:59.380: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:38:59.380: INFO: Pod daemon-set-sh4l7 is not available May 13 21:38:59.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:00.384: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:39:00.384: INFO: Pod daemon-set-sh4l7 is not available May 13 21:39:00.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:01.382: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:39:01.382: INFO: Pod daemon-set-sh4l7 is not available May 13 21:39:01.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:02.380: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:39:02.381: INFO: Pod daemon-set-sh4l7 is not available May 13 21:39:02.384: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:03.381: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:39:03.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:04.381: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:39:04.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:05.381: INFO: Wrong image for pod: daemon-set-s2wmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 13 21:39:05.381: INFO: Pod daemon-set-s2wmf is not available May 13 21:39:05.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:06.381: INFO: Pod daemon-set-2cnvq is not available May 13 21:39:06.384: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 13 21:39:06.388: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:06.390: INFO: Number of nodes with available pods: 1 May 13 21:39:06.390: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:39:07.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:07.459: INFO: Number of nodes with available pods: 1 May 13 21:39:07.459: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:39:08.415: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:08.463: INFO: Number of nodes with available pods: 1 May 13 21:39:08.463: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:39:09.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:39:09.400: INFO: Number of nodes with available pods: 2 May 13 21:39:09.400: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8151, will wait for the garbage collector to delete the pods May 13 21:39:09.472: INFO: Deleting DaemonSet.extensions daemon-set took: 6.009915ms May 13 21:39:09.772: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.22471ms May 13 21:39:19.276: INFO: Number of nodes with available pods: 0 May 13 21:39:19.276: INFO: Number of running nodes: 0, number of available pods: 0 May 13 21:39:19.296: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8151/daemonsets","resourceVersion":"15945949"},"items":null} May 13 21:39:19.299: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8151/pods","resourceVersion":"15945949"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:19.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8151" for this suite. 
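------------------------------
The rollout observed above — "Wrong image for pod" polls giving way to unavailable pods and then fresh ones — is what a RollingUpdate-strategy DaemonSet does when its pod template's image changes: old pods are deleted and replaced node by node until every node runs the new revision. A sketch of such a DaemonSet; the label key is an illustrative assumption, and the image is the one this run updates away from:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// The strategy under test; swapping the image below triggers
			// the node-by-node replacement seen in the log.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{Containers: []v1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine", // updated mid-test
				}}},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}
------------------------------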
• [SLOW TEST:34.706 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":97,"skipped":1631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:19.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:39:19.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232" in namespace "projected-9693" to be "success or failure" May 13 21:39:19.382: INFO: Pod "downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.800415ms May 13 21:39:21.488: INFO: Pod "downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109045234s May 13 21:39:23.492: INFO: Pod "downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112903769s STEP: Saw pod success May 13 21:39:23.492: INFO: Pod "downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232" satisfied condition "success or failure" May 13 21:39:23.501: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232 container client-container: STEP: delete the pod May 13 21:39:23.562: INFO: Waiting for pod downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232 to disappear May 13 21:39:23.568: INFO: Pod downwardapi-volume-09a26de5-2180-4231-83a5-f57589100232 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:23.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9693" for this suite. 
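------------------------------
The "mode on item file" just verified corresponds to the per-item Mode field of a projected downwardAPI source, which overrides the volume-wide default for that one file. A sketch; the field choice, path, and 0400 are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // applies only to this item, not the whole volume
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
------------------------------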
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1671,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:23.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:39:23.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0" in namespace "downward-api-6539" to be "success or failure" May 13 21:39:23.640: INFO: Pod "downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.154233ms May 13 21:39:25.660: INFO: Pod "downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023567685s May 13 21:39:27.666: INFO: Pod "downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.029929099s May 13 21:39:29.671: INFO: Pod "downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034216262s STEP: Saw pod success May 13 21:39:29.671: INFO: Pod "downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0" satisfied condition "success or failure" May 13 21:39:29.674: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0 container client-container: STEP: delete the pod May 13 21:39:29.712: INFO: Waiting for pod downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0 to disappear May 13 21:39:29.727: INFO: Pod downwardapi-volume-f04ba455-8823-4110-bbd2-912cb902a6d0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:29.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6539" for this suite. 
• [SLOW TEST:6.160 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:29.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 13 21:39:29.805: INFO: Waiting up to 5m0s for pod "pod-1bf42df7-443d-4b29-8923-7841a51c3838" in namespace "emptydir-3657" to be "success or failure" May 13 21:39:29.811: INFO: Pod "pod-1bf42df7-443d-4b29-8923-7841a51c3838": Phase="Pending", Reason="", readiness=false. Elapsed: 5.715475ms May 13 21:39:31.814: INFO: Pod "pod-1bf42df7-443d-4b29-8923-7841a51c3838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009494165s May 13 21:39:33.818: INFO: Pod "pod-1bf42df7-443d-4b29-8923-7841a51c3838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013349062s STEP: Saw pod success May 13 21:39:33.818: INFO: Pod "pod-1bf42df7-443d-4b29-8923-7841a51c3838" satisfied condition "success or failure" May 13 21:39:33.822: INFO: Trying to get logs from node jerma-worker2 pod pod-1bf42df7-443d-4b29-8923-7841a51c3838 container test-container: STEP: delete the pod May 13 21:39:33.862: INFO: Waiting for pod pod-1bf42df7-443d-4b29-8923-7841a51c3838 to disappear May 13 21:39:33.870: INFO: Pod pod-1bf42df7-443d-4b29-8923-7841a51c3838 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:33.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3657" for this suite. 
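The (non-root,0666,default) triple in the test name encodes: run the container as a non-root UID, create a file with mode 0666, and use the default emptyDir medium (node-local disk rather than tmpfs). A rough equivalent with a stock image (the suite actually uses its own mounttest image; busybox and the UID here are stand-ins):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo       # illustrative name
    spec:
      securityContext:
        runAsUser: 1001              # the "non-root" part of the test name
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "touch /test/f && chmod 0666 /test/f && ls -l /test/f"]
        volumeMounts:
        - name: scratch
          mountPath: /test
      volumes:
      - name: scratch
        emptyDir: {}                 # "default" medium; medium: Memory would be the tmpfs variant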
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1722,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:33.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 13 21:39:33.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-690 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 13 21:39:36.898: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0513 21:39:36.811079 2756 log.go:172] (0xc0009440b0) (0xc000ae2140) Create stream\nI0513 21:39:36.811133 2756 log.go:172] (0xc0009440b0) (0xc000ae2140) Stream added, broadcasting: 1\nI0513 21:39:36.820934 2756 log.go:172] (0xc0009440b0) Reply frame received for 1\nI0513 21:39:36.821023 2756 log.go:172] (0xc0009440b0) (0xc000ae41e0) Create stream\nI0513 21:39:36.821060 2756 log.go:172] (0xc0009440b0) (0xc000ae41e0) Stream added, broadcasting: 3\nI0513 21:39:36.823472 2756 log.go:172] (0xc0009440b0) Reply frame received for 3\nI0513 21:39:36.823532 2756 log.go:172] (0xc0009440b0) (0xc000ae4320) Create stream\nI0513 21:39:36.823558 2756 log.go:172] (0xc0009440b0) (0xc000ae4320) Stream added, broadcasting: 5\nI0513 21:39:36.824716 2756 log.go:172] (0xc0009440b0) Reply frame received for 5\nI0513 21:39:36.824754 2756 log.go:172] (0xc0009440b0) (0xc000ae21e0) Create stream\nI0513 21:39:36.824765 2756 log.go:172] (0xc0009440b0) (0xc000ae21e0) Stream added, broadcasting: 7\nI0513 21:39:36.825790 2756 log.go:172] (0xc0009440b0) Reply frame received for 7\nI0513 21:39:36.826002 2756 log.go:172] (0xc000ae41e0) (3) Writing data frame\nI0513 21:39:36.826452 2756 log.go:172] (0xc000ae41e0) (3) Writing data frame\nI0513 21:39:36.827653 2756 log.go:172] (0xc0009440b0) Data frame received for 5\nI0513 21:39:36.827669 2756 log.go:172] (0xc000ae4320) (5) Data frame handling\nI0513 21:39:36.827685 2756 log.go:172] (0xc000ae4320) (5) Data frame sent\nI0513 21:39:36.829653 2756 log.go:172] (0xc0009440b0) Data frame received for 5\nI0513 21:39:36.829663 2756 log.go:172] (0xc000ae4320) (5) Data frame handling\nI0513 21:39:36.829673 2756 log.go:172] (0xc000ae4320) (5) Data frame sent\nI0513 21:39:36.879423 2756 log.go:172] (0xc0009440b0) Data frame received for 7\nI0513 21:39:36.879458 2756 log.go:172] (0xc000ae21e0) (7) Data frame handling\nI0513 
21:39:36.879477 2756 log.go:172] (0xc0009440b0) Data frame received for 5\nI0513 21:39:36.879485 2756 log.go:172] (0xc000ae4320) (5) Data frame handling\nI0513 21:39:36.879756 2756 log.go:172] (0xc0009440b0) Data frame received for 1\nI0513 21:39:36.879771 2756 log.go:172] (0xc000ae2140) (1) Data frame handling\nI0513 21:39:36.879784 2756 log.go:172] (0xc000ae2140) (1) Data frame sent\nI0513 21:39:36.879791 2756 log.go:172] (0xc0009440b0) (0xc000ae2140) Stream removed, broadcasting: 1\nI0513 21:39:36.879973 2756 log.go:172] (0xc0009440b0) (0xc000ae2140) Stream removed, broadcasting: 1\nI0513 21:39:36.879988 2756 log.go:172] (0xc0009440b0) (0xc000ae41e0) Stream removed, broadcasting: 3\nI0513 21:39:36.879994 2756 log.go:172] (0xc0009440b0) (0xc000ae4320) Stream removed, broadcasting: 5\nI0513 21:39:36.880070 2756 log.go:172] (0xc0009440b0) (0xc000ae41e0) Stream removed, broadcasting: 3\nI0513 21:39:36.880096 2756 log.go:172] (0xc0009440b0) (0xc000ae21e0) Stream removed, broadcasting: 7\nI0513 21:39:36.880390 2756 log.go:172] (0xc0009440b0) Go away received\n" May 13 21:39:36.898: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-690" for this suite. • [SLOW TEST:5.079 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":101,"skipped":1725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:38.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:39:39.483: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:39:41.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:39:43.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002779, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:39:46.533: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:46.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5128" for this suite. STEP: Destroying namespace "webhook-5128-markers" for this suite. 
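What makes this webhook "fail closed" is failurePolicy: Fail combined with a clientConfig the API server cannot reach: with no admission verdict available, it rejects the request outright. A minimal sketch of such a registration, assuming the v1 admissionregistration API (the service name and namespace match the log; the webhook name, path, and rules are illustrative):

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: fail-closed-demo                 # illustrative name
    webhooks:
    - name: fail-closed.example.com          # illustrative
      failurePolicy: Fail                    # reject when the webhook cannot answer
      sideEffects: None
      admissionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          namespace: webhook-5128
          name: e2e-test-webhook
          path: /unreachable                 # illustrative: nothing serves this path
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
        scope: Namespaced

With this in place, every matching ConfigMap create is denied, which is exactly what the "create a configmap should be unconditionally rejected by the webhook" step asserts.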
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.764 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":102,"skipped":1754,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:46.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ff106f2e-9e31-4a35-ae06-41606f00dc1d STEP: Creating a pod to test consume configMaps May 13 21:39:47.013: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845" in namespace "projected-4922" to be "success or failure" May 13 21:39:47.174: INFO: Pod "pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845": Phase="Pending", Reason="", readiness=false. Elapsed: 160.151222ms May 13 21:39:49.178: INFO: Pod "pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164385568s May 13 21:39:51.195: INFO: Pod "pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181658982s STEP: Saw pod success May 13 21:39:51.195: INFO: Pod "pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845" satisfied condition "success or failure" May 13 21:39:51.197: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845 container projected-configmap-volume-test: STEP: delete the pod May 13 21:39:51.239: INFO: Waiting for pod pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845 to disappear May 13 21:39:51.254: INFO: Pod pod-projected-configmaps-4dab7580-fa4d-4816-a01b-b898281ae845 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:39:51.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4922" for this suite. 
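The "with mappings" variant projects a ConfigMap key under a different file path instead of under its key name. A minimal sketch (names are illustrative; the suite generates its own):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-cm-demo
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-pod-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: cm
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: cm
        projected:
          sources:
          - configMap:
              name: projected-cm-demo
              items:
              - key: data-1
                path: path/to/data-2   # the mapping: key renamed on disk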
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:39:51.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-368 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-368 I0513 21:39:51.411896 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-368, replica count: 2 I0513 21:39:54.462284 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:39:57.462497 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 21:39:57.462: INFO: Creating new exec pod May 13 21:40:02.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-368 execpodxqgq8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 13 21:40:02.749: INFO: stderr: "I0513 21:40:02.632776 2777 log.go:172] (0xc000321130) (0xc0008c06e0) Create stream\nI0513 21:40:02.632853 2777 log.go:172] (0xc000321130) (0xc0008c06e0) Stream added, broadcasting: 1\nI0513 21:40:02.637827 2777 log.go:172] (0xc000321130) Reply frame received for 1\nI0513 21:40:02.637879 2777 log.go:172] (0xc000321130) (0xc00062a5a0) Create stream\nI0513 21:40:02.637906 2777 log.go:172] (0xc000321130) (0xc00062a5a0) Stream added, broadcasting: 3\nI0513 21:40:02.638806 2777 log.go:172] (0xc000321130) Reply frame received for 3\nI0513 21:40:02.638828 2777 log.go:172] (0xc000321130) (0xc0004af360) Create stream\nI0513 21:40:02.638835 2777 log.go:172] (0xc000321130) (0xc0004af360) Stream added, broadcasting: 5\nI0513 21:40:02.639623 2777 log.go:172] (0xc000321130) Reply frame received for 5\nI0513 21:40:02.721352 2777 log.go:172] (0xc000321130) Data frame received for 5\nI0513 21:40:02.721382 2777 log.go:172] (0xc0004af360) (5) Data frame handling\nI0513 21:40:02.721395 2777 log.go:172] (0xc0004af360) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0513 21:40:02.739760 2777 log.go:172] (0xc000321130) Data frame received for 5\nI0513 21:40:02.739780 2777 log.go:172] (0xc0004af360) (5) Data frame handling\nI0513 21:40:02.739790 2777 log.go:172] (0xc0004af360) (5) Data frame 
sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0513 21:40:02.740247 2777 log.go:172] (0xc000321130) Data frame received for 5\nI0513 21:40:02.740274 2777 log.go:172] (0xc0004af360) (5) Data frame handling\nI0513 21:40:02.740425 2777 log.go:172] (0xc000321130) Data frame received for 3\nI0513 21:40:02.740441 2777 log.go:172] (0xc00062a5a0) (3) Data frame handling\nI0513 21:40:02.742747 2777 log.go:172] (0xc000321130) Data frame received for 1\nI0513 21:40:02.742768 2777 log.go:172] (0xc0008c06e0) (1) Data frame handling\nI0513 21:40:02.742787 2777 log.go:172] (0xc0008c06e0) (1) Data frame sent\nI0513 21:40:02.742802 2777 log.go:172] (0xc000321130) (0xc0008c06e0) Stream removed, broadcasting: 1\nI0513 21:40:02.742811 2777 log.go:172] (0xc000321130) Go away received\nI0513 21:40:02.743191 2777 log.go:172] (0xc000321130) (0xc0008c06e0) Stream removed, broadcasting: 1\nI0513 21:40:02.743212 2777 log.go:172] (0xc000321130) (0xc00062a5a0) Stream removed, broadcasting: 3\nI0513 21:40:02.743222 2777 log.go:172] (0xc000321130) (0xc0004af360) Stream removed, broadcasting: 5\n" May 13 21:40:02.749: INFO: stdout: "" May 13 21:40:02.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-368 execpodxqgq8 -- /bin/sh -x -c nc -zv -t -w 2 10.111.161.201 80' May 13 21:40:02.939: INFO: stderr: "I0513 21:40:02.870042 2798 log.go:172] (0xc000a14790) (0xc0009d8000) Create stream\nI0513 21:40:02.870107 2798 log.go:172] (0xc000a14790) (0xc0009d8000) Stream added, broadcasting: 1\nI0513 21:40:02.872891 2798 log.go:172] (0xc000a14790) Reply frame received for 1\nI0513 21:40:02.872939 2798 log.go:172] (0xc000a14790) (0xc0006abb80) Create stream\nI0513 21:40:02.872954 2798 log.go:172] (0xc000a14790) (0xc0006abb80) Stream added, broadcasting: 3\nI0513 21:40:02.874212 2798 log.go:172] (0xc000a14790) Reply frame received for 3\nI0513 21:40:02.874272 2798 log.go:172] (0xc000a14790) (0xc000286000) Create stream\nI0513 21:40:02.874297 2798 log.go:172] (0xc000a14790) (0xc000286000) Stream added, broadcasting: 5\nI0513 21:40:02.875326 2798 log.go:172] (0xc000a14790) Reply frame received for 5\nI0513 21:40:02.932249 2798 log.go:172] (0xc000a14790) Data frame received for 3\nI0513 21:40:02.932272 2798 log.go:172] (0xc0006abb80) (3) Data frame handling\nI0513 21:40:02.932286 2798 log.go:172] (0xc000a14790) Data frame received for 5\nI0513 21:40:02.932294 2798 log.go:172] (0xc000286000) (5) Data frame handling\nI0513 21:40:02.932302 2798 log.go:172] (0xc000286000) (5) Data frame sent\nI0513 21:40:02.932307 2798 log.go:172] (0xc000a14790) Data frame received for 5\nI0513 21:40:02.932311 2798 log.go:172] (0xc000286000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.161.201 80\nConnection to 10.111.161.201 80 port [tcp/http] succeeded!\nI0513 21:40:02.934125 2798 log.go:172] (0xc000a14790) Data frame received for 1\nI0513 21:40:02.934143 2798 log.go:172] (0xc0009d8000) (1) Data frame handling\nI0513 21:40:02.934153 2798 log.go:172] (0xc0009d8000) (1) Data frame sent\nI0513 21:40:02.934163 2798 log.go:172] (0xc000a14790) (0xc0009d8000) Stream removed, broadcasting: 1\nI0513 21:40:02.934422 2798 log.go:172] (0xc000a14790) (0xc0009d8000) Stream removed, broadcasting: 1\nI0513 21:40:02.934441 2798 log.go:172] (0xc000a14790) (0xc0006abb80) Stream removed, broadcasting: 3\nI0513 21:40:02.934450 2798 log.go:172] (0xc000a14790) (0xc000286000) Stream removed, broadcasting: 5\n" May 13 21:40:02.939: INFO: stdout: "" May 13 21:40:02.939: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-368 execpodxqgq8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32345' May 13 21:40:03.122: INFO: stderr: "I0513 21:40:03.054047 2819 log.go:172] (0xc0009dc630) (0xc0006cdcc0) Create stream\nI0513 21:40:03.054114 2819 log.go:172] (0xc0009dc630) (0xc0006cdcc0) Stream added, broadcasting: 1\nI0513 21:40:03.056996 2819 log.go:172] (0xc0009dc630) Reply frame received for 1\nI0513 21:40:03.057027 2819 log.go:172] (0xc0009dc630) (0xc000457400) Create stream\nI0513 21:40:03.057036 2819 log.go:172] (0xc0009dc630) (0xc000457400) Stream added, broadcasting: 3\nI0513 21:40:03.058113 2819 log.go:172] (0xc0009dc630) Reply frame received for 3\nI0513 21:40:03.058138 2819 log.go:172] (0xc0009dc630) (0xc0006cdd60) Create stream\nI0513 21:40:03.058145 2819 log.go:172] (0xc0009dc630) (0xc0006cdd60) Stream added, broadcasting: 5\nI0513 21:40:03.058942 2819 log.go:172] (0xc0009dc630) Reply frame received for 5\nI0513 21:40:03.117096 2819 log.go:172] (0xc0009dc630) Data frame received for 3\nI0513 21:40:03.117338 2819 log.go:172] (0xc000457400) (3) Data frame handling\nI0513 21:40:03.117368 2819 log.go:172] (0xc0009dc630) Data frame received for 5\nI0513 21:40:03.117392 2819 log.go:172] (0xc0006cdd60) (5) Data frame handling\nI0513 21:40:03.117420 2819 log.go:172] (0xc0006cdd60) (5) Data frame sent\nI0513 21:40:03.117432 2819 log.go:172] (0xc0009dc630) Data frame received for 5\nI0513 21:40:03.117443 2819 log.go:172] (0xc0006cdd60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32345\nConnection to 172.17.0.10 32345 port [tcp/32345] succeeded!\nI0513 21:40:03.118327 2819 log.go:172] (0xc0009dc630) Data frame received for 1\nI0513 21:40:03.118342 2819 log.go:172] (0xc0006cdcc0) (1) Data frame handling\nI0513 21:40:03.118352 2819 log.go:172] (0xc0006cdcc0) (1) Data frame sent\nI0513 21:40:03.118361 2819 log.go:172] (0xc0009dc630) (0xc0006cdcc0) Stream removed, broadcasting: 1\nI0513 21:40:03.118465 2819 log.go:172] (0xc0009dc630) Go away received\nI0513 21:40:03.118587 2819 log.go:172] (0xc0009dc630) (0xc0006cdcc0) Stream removed, broadcasting: 1\nI0513 21:40:03.118600 2819 log.go:172] (0xc0009dc630) (0xc000457400) Stream removed, broadcasting: 3\nI0513 21:40:03.118606 2819 log.go:172] (0xc0009dc630) (0xc0006cdd60) Stream removed, broadcasting: 5\n" May 13 21:40:03.122: INFO: stdout: "" May 13 21:40:03.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-368 execpodxqgq8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32345' May 13 21:40:03.318: INFO: stderr: "I0513 21:40:03.253546 2841 log.go:172] (0xc000572d10) (0xc00093e000) Create stream\nI0513 21:40:03.253596 2841 log.go:172] (0xc000572d10) (0xc00093e000) Stream added, broadcasting: 1\nI0513 21:40:03.256269 2841 log.go:172] (0xc000572d10) Reply frame received for 1\nI0513 21:40:03.256320 2841 log.go:172] (0xc000572d10) (0xc000633ae0) Create stream\nI0513 21:40:03.256336 2841 log.go:172] (0xc000572d10) (0xc000633ae0) Stream added, broadcasting: 3\nI0513 21:40:03.257248 2841 log.go:172] (0xc000572d10) Reply frame received for 3\nI0513 21:40:03.257277 2841 log.go:172] (0xc000572d10) (0xc00093e0a0) Create stream\nI0513 21:40:03.257293 2841 log.go:172] (0xc000572d10) (0xc00093e0a0) Stream added, broadcasting: 5\nI0513 21:40:03.258037 2841 log.go:172] (0xc000572d10) Reply frame received for 5\nI0513 21:40:03.311610 2841 log.go:172] (0xc000572d10) Data frame received for 5\nI0513 21:40:03.311634 2841 log.go:172] (0xc00093e0a0) (5) Data 
frame handling\nI0513 21:40:03.311652 2841 log.go:172] (0xc00093e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32345\nI0513 21:40:03.312111 2841 log.go:172] (0xc000572d10) Data frame received for 5\nI0513 21:40:03.312124 2841 log.go:172] (0xc00093e0a0) (5) Data frame handling\nI0513 21:40:03.312131 2841 log.go:172] (0xc00093e0a0) (5) Data frame sent\nConnection to 172.17.0.8 32345 port [tcp/32345] succeeded!\nI0513 21:40:03.312429 2841 log.go:172] (0xc000572d10) Data frame received for 3\nI0513 21:40:03.312448 2841 log.go:172] (0xc000633ae0) (3) Data frame handling\nI0513 21:40:03.312476 2841 log.go:172] (0xc000572d10) Data frame received for 5\nI0513 21:40:03.312486 2841 log.go:172] (0xc00093e0a0) (5) Data frame handling\nI0513 21:40:03.314202 2841 log.go:172] (0xc000572d10) Data frame received for 1\nI0513 21:40:03.314218 2841 log.go:172] (0xc00093e000) (1) Data frame handling\nI0513 21:40:03.314234 2841 log.go:172] (0xc00093e000) (1) Data frame sent\nI0513 21:40:03.314296 2841 log.go:172] (0xc000572d10) (0xc00093e000) Stream removed, broadcasting: 1\nI0513 21:40:03.314334 2841 log.go:172] (0xc000572d10) Go away received\nI0513 21:40:03.314548 2841 log.go:172] (0xc000572d10) (0xc00093e000) Stream removed, broadcasting: 1\nI0513 21:40:03.314560 2841 log.go:172] (0xc000572d10) (0xc000633ae0) Stream removed, broadcasting: 3\nI0513 21:40:03.314567 2841 log.go:172] (0xc000572d10) (0xc00093e0a0) Stream removed, broadcasting: 5\n" May 13 21:40:03.318: INFO: stdout: "" May 13 21:40:03.318: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:40:03.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-368" for this suite. 
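The conversion under test swaps the service's type in place: before, an ExternalName service is nothing but a DNS alias; after, the same name is a NodePort service backed by the externalname-service replication controller's pods. A rough before/after sketch (the service name, namespace, and port 80 match the log; the externalName target and selector are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: externalname-service
      namespace: services-368
    spec:
      type: ExternalName
      externalName: example.com        # illustrative alias target
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: externalname-service
      namespace: services-368
    spec:
      type: NodePort
      selector:
        name: externalname-service     # illustrative; must match the RC's pod labels
      ports:
      - port: 80
        targetPort: 80

The three nc probes above then confirm reachability by service name, by the allocated ClusterIP (10.111.161.201:80), and on each node IP at the assigned NodePort (32345).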
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.129 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":104,"skipped":1871,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:40:03.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-3a646fbc-df78-4695-a998-d53bcd7810f2 in namespace container-probe-5951 May 13 21:40:07.478: INFO: Started pod liveness-3a646fbc-df78-4695-a998-d53bcd7810f2 in namespace container-probe-5951 STEP: checking the pod's current state and verifying that restartCount is present May 13 21:40:07.480: INFO: Initial restart count of pod liveness-3a646fbc-df78-4695-a998-d53bcd7810f2 is 0 May 13 21:40:31.552: INFO: Restart count of pod container-probe-5951/liveness-3a646fbc-df78-4695-a998-d53bcd7810f2 is now 1 (24.072229214s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:40:31.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5951" for this suite. 
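The restart counted above (0 to 1 after roughly 24 s) is the kubelet reacting to a failing HTTP liveness probe. A minimal sketch of a pod that behaves this way, assuming a server that starts returning errors from /healthz after some time (agnhost is the image this suite ships; the probe timings are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo                # illustrative name
    spec:
      containers:
      - name: liveness
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["liveness"]               # serves /healthz, then starts failing
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1            # one failed probe is enough to restart

Once /healthz stops returning 200, the kubelet kills and restarts the container, and restartCount increments exactly as the log records.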
• [SLOW TEST:28.212 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1882,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:40:31.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 13 21:40:31.655: INFO: PodSpec: initContainers in spec.initContainers May 13 21:41:22.035: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-483fc58c-0dd2-46c2-8adb-50775161998c", GenerateName:"", Namespace:"init-container-2298", SelfLink:"/api/v1/namespaces/init-container-2298/pods/pod-init-483fc58c-0dd2-46c2-8adb-50775161998c", UID:"d934f677-396a-4484-af5e-a6279780fd31", ResourceVersion:"15946671", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725002831, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"655361433"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vqlmc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004622000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vqlmc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vqlmc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vqlmc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e7e068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b723c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e7e0f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e7e110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e7e118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e7e11c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002832, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002832, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002832, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002831, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.201", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.201"}}, StartTime:(*v1.Time)(0xc004034040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc004034080), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022be0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://65274436ed82d4b12f40c9611d5038a6957ab50c41b9cc0ba4c2aefa2123c215", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0040340a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004034060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001e7e1cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:41:22.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2298" for this suite. • [SLOW TEST:50.510 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":106,"skipped":1884,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:41:22.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:41:22.251: INFO: Creating deployment "test-recreate-deployment" May 13 21:41:22.271: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 13 21:41:22.341: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 13 21:41:24.347: INFO: Waiting deployment "test-recreate-deployment" to complete May 13 21:41:24.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002882, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002882, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002882, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725002882, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:41:26.365: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 13 21:41:26.370: INFO: Updating deployment test-recreate-deployment May 13 21:41:26.370: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 13 21:41:26.648: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5445 /apis/apps/v1/namespaces/deployment-5445/deployments/test-recreate-deployment ed5322b0-4a07-43ea-bab3-47e93d9a8f58 15946732 2 2020-05-13 21:41:22 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008776b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-13 21:41:26 +0000 UTC,LastTransitionTime:2020-05-13 21:41:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-13 21:41:26 +0000 UTC,LastTransitionTime:2020-05-13 21:41:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 13 21:41:26.726: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5445 /apis/apps/v1/namespaces/deployment-5445/replicasets/test-recreate-deployment-5f94c574ff a59d74bd-55ef-43df-a5b2-015b3a15a699 15946730 1 2020-05-13 21:41:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 
Deployment test-recreate-deployment ed5322b0-4a07-43ea-bab3-47e93d9a8f58 0xc000877a57 0xc000877a58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000877ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 21:41:26.726: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 13 21:41:26.726: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5445 /apis/apps/v1/namespaces/deployment-5445/replicasets/test-recreate-deployment-799c574856 8de247f2-c7f5-421a-bb25-0ccc6b44c27b 15946721 2 2020-05-13 21:41:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ed5322b0-4a07-43ea-bab3-47e93d9a8f58 0xc000877b27 0xc000877b28}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000877b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 21:41:26.731: INFO: Pod "test-recreate-deployment-5f94c574ff-8f5c9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-8f5c9 test-recreate-deployment-5f94c574ff- deployment-5445 /api/v1/namespaces/deployment-5445/pods/test-recreate-deployment-5f94c574ff-8f5c9 84b5c222-93c5-46d8-a97c-0f9868d934db 15946733 0 2020-05-13 21:41:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a59d74bd-55ef-43df-a5b2-015b3a15a699 0xc000877fe7 0xc000877fe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjkdl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjkdl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjkdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-13 21:41:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:41:26.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5445" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":107,"skipped":1916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:41:26.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1720.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1720.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1720.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1720.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:41:35.233: INFO: DNS probes using dns-1720/dns-test-86175a76-f0e6-4170-b2f1-ea8f768bb1fd succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:41:35.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1720" for this suite. • [SLOW TEST:8.536 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":108,"skipped":1951,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:41:35.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 21:41:40.995: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:41:41.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7359" for this suite. 
• [SLOW TEST:5.718 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1972,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:41:41.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 13 21:41:41.192: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:41.199: INFO: Number of nodes with available pods: 0 May 13 21:41:41.199: INFO: Node jerma-worker is running more than one daemon pod May 13 21:41:42.256: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:42.287: INFO: Number of nodes with available pods: 0 May 13 21:41:42.287: INFO: Node jerma-worker is running more than one daemon pod May 13 21:41:43.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:43.339: INFO: Number of nodes with available pods: 0 May 13 21:41:43.339: INFO: Node jerma-worker is running more than one daemon pod May 13 21:41:44.258: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:44.260: INFO: Number of nodes with available pods: 0 May 13 21:41:44.260: INFO: Node jerma-worker is running more than one daemon pod May 13 21:41:45.202: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:45.204: INFO: Number of nodes with available pods: 1 May 13 21:41:45.205: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:41:46.209: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:46.212: INFO: Number of nodes with available pods: 2 May 13 21:41:46.212: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 13 21:41:46.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:46.340: INFO: Number of nodes with available pods: 1 May 13 21:41:46.340: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:41:47.510: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:47.577: INFO: Number of nodes with available pods: 1 May 13 21:41:47.577: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:41:48.347: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:48.349: INFO: Number of nodes with available pods: 1 May 13 21:41:48.350: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:41:49.379: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:49.382: INFO: Number of nodes with available pods: 1 May 13 21:41:49.382: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:41:50.346: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:50.349: INFO: Number of nodes with available pods: 1 May 13 21:41:50.349: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:41:51.346: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:41:51.350: INFO: Number of nodes with available pods: 2 May 13 21:41:51.350: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4829, will wait for the garbage collector to delete the pods May 13 21:41:51.414: INFO: Deleting DaemonSet.extensions daemon-set took: 6.982411ms May 13 21:41:51.514: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.31536ms May 13 21:41:59.517: INFO: Number of nodes with available pods: 0 May 13 21:41:59.517: INFO: Number of running nodes: 0, number of available pods: 0 May 13 21:41:59.519: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4829/daemonsets","resourceVersion":"15946988"},"items":null} May 13 21:41:59.521: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4829/pods","resourceVersion":"15946988"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:41:59.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4829" for this suite. 
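------------------------------
For readers tracking the polling above: the DaemonSet under test is a plain one-container set whose pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, which is why jerma-control-plane is skipped on every check. Once a pod is forcibly marked Failed, the controller sees fewer running pods than desired nodes and creates a replacement, which the second loop waits for. A minimal sketch, reusing the httpd image seen elsewhere in this suite; labels are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative selector
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// so the control-plane node never receives a pod.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}
------------------------------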
• [SLOW TEST:18.468 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":110,"skipped":2011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:41:59.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:41:59.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1440" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":111,"skipped":2052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:41:59.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 13 21:41:59.803: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:42:07.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8395" for this suite. 
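------------------------------
The only hint the log gives about the pod's shape is "PodSpec: initContainers in spec.initContainers", so it is worth spelling out: the pod pairs a failing init container with a normal app container under restartPolicy: Never. Init containers must all succeed before app containers start, and Never forbids retries, so the app container never runs and the pod terminates in phase Failed. A minimal sketch under those assumptions (busybox image and names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "exit 1"}, // fails; Never means no retry
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo unreachable"}, // never started
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------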
• [SLOW TEST:7.350 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":112,"skipped":2084,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:42:07.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-04423c2b-9edc-4bdd-8c39-242e5ad078fb STEP: Creating configMap with name cm-test-opt-upd-4be7459f-4503-413c-b953-f812d3eeea56 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-04423c2b-9edc-4bdd-8c39-242e5ad078fb STEP: Updating configmap cm-test-opt-upd-4be7459f-4503-413c-b953-f812d3eeea56 STEP: Creating configMap with name cm-test-opt-create-6992dc01-9b0e-4f34-a00e-cca519655aed STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:43:31.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6064" for this suite. 
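------------------------------
The long "waiting to observe update in volume" stretch above is the kubelet's periodic volume sync propagating three changes into the running pod: one optional ConfigMap was deleted, one updated, and one created only after the pod started. The detail that makes the create case work is the Optional flag on the volume source; a minimal sketch showing one such volume (the test mounts three, and all names here are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox", // assumed image
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						// Optional lets the pod start (and keep running) even while
						// the referenced ConfigMap does not exist yet; the kubelet
						// projects the data in once it appears.
						Optional: &optional,
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------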
• [SLOW TEST:84.877 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":2093,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:43:31.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:43:32.596: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:43:34.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003012, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003012, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003012, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003012, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:43:37.808: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 13 21:43:44.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-844 to-be-attached-pod -i -c=container1' May 13 21:43:44.281: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:43:44.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-844" for this suite. STEP: Destroying namespace "webhook-844-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.549 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":114,"skipped":2101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:43:44.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 13 21:43:44.573: INFO: Waiting up to 5m0s for pod "pod-d14cab82-05e3-49c5-8f23-45298e42708f" in namespace "emptydir-5860" to be "success or failure" May 13 21:43:44.584: INFO: Pod "pod-d14cab82-05e3-49c5-8f23-45298e42708f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082215ms May 13 21:43:46.587: INFO: Pod "pod-d14cab82-05e3-49c5-8f23-45298e42708f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013096702s May 13 21:43:48.620: INFO: Pod "pod-d14cab82-05e3-49c5-8f23-45298e42708f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04661576s STEP: Saw pod success May 13 21:43:48.620: INFO: Pod "pod-d14cab82-05e3-49c5-8f23-45298e42708f" satisfied condition "success or failure" May 13 21:43:48.696: INFO: Trying to get logs from node jerma-worker2 pod pod-d14cab82-05e3-49c5-8f23-45298e42708f container test-container: STEP: delete the pod May 13 21:43:48.791: INFO: Waiting for pod pod-d14cab82-05e3-49c5-8f23-45298e42708f to disappear May 13 21:43:48.823: INFO: Pod pod-d14cab82-05e3-49c5-8f23-45298e42708f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:43:48.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5860" for this suite. 
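------------------------------
Each case in this EmptyDir family is the same single-shot pod with three knobs: the writing user (root or non-root), the file mode, and the medium (default node storage or tmpfs). A rough Go equivalent of the (root,0644,default) case just run, with an assumed busybox image standing in for the suite's test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumed; the suite uses its own test image
				// Create a 0644 file as root on the default medium and report
				// its mode, mirroring the assertion the test makes on its logs.
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && stat -c %a /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "ed", MountPath: "/ed"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "ed",
				// For the tmpfs variants, set Medium: corev1.StorageMediumMemory.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------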
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":2156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:43:48.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 13 21:43:48.880: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:43:57.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5857" for this suite. • [SLOW TEST:8.408 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":116,"skipped":2182,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:43:57.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-f30a6672-ea9e-4f99-8c73-fbff6a5f572a STEP: Creating a pod to test consume secrets May 13 21:43:57.318: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010" in namespace "projected-1436" to be "success or failure" May 13 21:43:57.322: INFO: Pod "pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010": Phase="Pending", Reason="", readiness=false. Elapsed: 3.885707ms May 13 21:43:59.325: INFO: Pod "pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007780827s May 13 21:44:01.330: INFO: Pod "pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012217278s STEP: Saw pod success May 13 21:44:01.330: INFO: Pod "pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010" satisfied condition "success or failure" May 13 21:44:01.333: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010 container projected-secret-volume-test: STEP: delete the pod May 13 21:44:01.411: INFO: Waiting for pod pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010 to disappear May 13 21:44:01.418: INFO: Pod pod-projected-secrets-ed907719-b942-40fa-845a-5db5c8ca3010 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:44:01.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1436" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":2183,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:44:01.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 13 21:44:02.154: INFO: Pod name wrapped-volume-race-aee21228-553b-4836-9335-b4d3f7da6cdb: Found 0 pods out of 5 May 13 21:44:07.162: INFO: Pod name wrapped-volume-race-aee21228-553b-4836-9335-b4d3f7da6cdb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-aee21228-553b-4836-9335-b4d3f7da6cdb in namespace emptydir-wrapper-9634, will wait for the garbage collector to delete the pods May 13 21:44:21.235: INFO: Deleting ReplicationController wrapped-volume-race-aee21228-553b-4836-9335-b4d3f7da6cdb took: 4.769848ms May 13 21:44:21.635: INFO: Terminating ReplicationController wrapped-volume-race-aee21228-553b-4836-9335-b4d3f7da6cdb pods took: 400.187383ms STEP: Creating RC which spawns configmap-volume pods May 13 21:44:39.772: INFO: Pod name wrapped-volume-race-7f90d42a-cb4a-47cb-8961-62ed549e9420: Found 0 pods out of 5 May 13 21:44:44.778: INFO: Pod name wrapped-volume-race-7f90d42a-cb4a-47cb-8961-62ed549e9420: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7f90d42a-cb4a-47cb-8961-62ed549e9420 in namespace emptydir-wrapper-9634, will wait for the garbage collector to delete the pods May 13 21:44:58.878: INFO: Deleting ReplicationController wrapped-volume-race-7f90d42a-cb4a-47cb-8961-62ed549e9420 took: 14.258404ms May 13 21:44:59.278: INFO: Terminating 
ReplicationController wrapped-volume-race-7f90d42a-cb4a-47cb-8961-62ed549e9420 pods took: 400.23612ms STEP: Creating RC which spawns configmap-volume pods May 13 21:45:09.408: INFO: Pod name wrapped-volume-race-3f3e18cc-28f2-4060-8716-9a400734dbe9: Found 0 pods out of 5 May 13 21:45:14.413: INFO: Pod name wrapped-volume-race-3f3e18cc-28f2-4060-8716-9a400734dbe9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3f3e18cc-28f2-4060-8716-9a400734dbe9 in namespace emptydir-wrapper-9634, will wait for the garbage collector to delete the pods May 13 21:45:30.493: INFO: Deleting ReplicationController wrapped-volume-race-3f3e18cc-28f2-4060-8716-9a400734dbe9 took: 8.27581ms May 13 21:45:30.794: INFO: Terminating ReplicationController wrapped-volume-race-3f3e18cc-28f2-4060-8716-9a400734dbe9 pods took: 300.275713ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:45:41.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9634" for this suite. • [SLOW TEST:99.619 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":118,"skipped":2188,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:45:41.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 13 21:45:41.205: INFO: Waiting up to 5m0s for pod "downward-api-c7f293a5-9f0a-4416-9c32-13659615942d" in namespace "downward-api-3730" to be "success or failure" May 13 21:45:41.279: INFO: Pod "downward-api-c7f293a5-9f0a-4416-9c32-13659615942d": Phase="Pending", Reason="", readiness=false. Elapsed: 73.75588ms May 13 21:45:43.283: INFO: Pod "downward-api-c7f293a5-9f0a-4416-9c32-13659615942d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077901843s May 13 21:45:45.288: INFO: Pod "downward-api-c7f293a5-9f0a-4416-9c32-13659615942d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08226019s May 13 21:45:47.299: INFO: Pod "downward-api-c7f293a5-9f0a-4416-9c32-13659615942d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094065186s STEP: Saw pod success May 13 21:45:47.300: INFO: Pod "downward-api-c7f293a5-9f0a-4416-9c32-13659615942d" satisfied condition "success or failure" May 13 21:45:47.305: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c7f293a5-9f0a-4416-9c32-13659615942d container dapi-container: STEP: delete the pod May 13 21:45:47.373: INFO: Waiting for pod downward-api-c7f293a5-9f0a-4416-9c32-13659615942d to disappear May 13 21:45:47.377: INFO: Pod downward-api-c7f293a5-9f0a-4416-9c32-13659615942d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:45:47.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3730" for this suite. • [SLOW TEST:6.346 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2190,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:45:47.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1490 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 13 21:45:47.520: INFO: Found 0 stateful pods, waiting for 3 May 13 21:45:57.525: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 21:45:57.525: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 21:45:57.525: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 13 21:46:07.525: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 21:46:07.525: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 21:46:07.525: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to 
docker.io/library/httpd:2.4.39-alpine May 13 21:46:07.552: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 13 21:46:17.623: INFO: Updating stateful set ss2 May 13 21:46:17.673: INFO: Waiting for Pod statefulset-1490/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 13 21:46:27.681: INFO: Waiting for Pod statefulset-1490/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 13 21:46:38.384: INFO: Found 2 stateful pods, waiting for 3 May 13 21:46:48.389: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 21:46:48.389: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 21:46:48.389: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 13 21:46:48.412: INFO: Updating stateful set ss2 May 13 21:46:48.417: INFO: Waiting for Pod statefulset-1490/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 13 21:46:58.425: INFO: Waiting for Pod statefulset-1490/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 13 21:47:08.440: INFO: Updating stateful set ss2 May 13 21:47:08.580: INFO: Waiting for StatefulSet statefulset-1490/ss2 to complete update May 13 21:47:08.580: INFO: Waiting for Pod statefulset-1490/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 13 21:47:18.587: INFO: Waiting for StatefulSet statefulset-1490/ss2 to complete update May 13 21:47:18.588: INFO: Waiting for Pod statefulset-1490/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 13 21:47:28.588: INFO: Deleting all statefulset in ns statefulset-1490 May 13 21:47:28.590: INFO: Scaling statefulset ss2 to 0 May 13 21:47:58.626: INFO: Waiting for statefulset status.replicas updated to 0 May 13 21:47:58.628: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:47:58.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1490" for this suite. 
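------------------------------
The canary and phased behavior above is driven entirely by the RollingUpdate partition: pods with an ordinal below the partition stay on the old revision (ss2-65c7964b94 here), so setting the partition to 2 rolls only ss2-2 to the new revision (ss2-84f9d6bf57), and lowering it in phases then rolls ss2-1 and ss2-0. A minimal sketch of the relevant spec fields, with illustrative labels and the canary image from the log:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	partition := int32(2) // only ordinals >= 2 (ss2-2) move to the new revision
	labels := map[string]string{"app": "ss2"} // illustrative
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/httpd:2.4.39-alpine", // the canary image
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(b))
}
------------------------------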
• [SLOW TEST:131.430 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":120,"skipped":2204,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:47:58.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-af9bf443-3795-4120-aedd-d20ea82b1d20 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:47:58.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5920" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":121,"skipped":2206,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:47:58.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 13 21:48:07.066: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:07.112: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:09.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:09.115: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:11.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:11.115: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:13.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:13.116: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:15.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:15.116: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:17.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:17.116: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:19.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:19.115: INFO: Pod pod-with-prestop-exec-hook still exists May 13 21:48:21.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 21:48:21.115: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:48:21.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2600" for this suite. 
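------------------------------
The drawn-out deletion above is expected: a preStop exec hook runs inside the container after deletion is requested and must finish (within the termination grace period) before the container is stopped, and the test then asks its HTTPGet handler pod whether the hook fired. A minimal sketch of a pod with such a hook, assuming a busybox image and a hypothetical peer URL in place of the suite's handler; note that the hook type below is corev1.Handler in the v1.17-era k8s.io/api this suite builds against, renamed LifecycleHandler in later releases:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // LifecycleHandler in newer k8s.io/api
						Exec: &corev1.ExecAction{
							// Runs in the container after deletion is requested and
							// before SIGTERM; here it pings a (hypothetical) peer.
							Command: []string{"sh", "-c", "wget -qO- http://handler.example:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------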
• [SLOW TEST:22.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2207,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:48:21.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-62bd1f32-5d86-4e2b-abe6-f369165dbef0 STEP: Creating a pod to test consume configMaps May 13 21:48:21.316: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162" in namespace "projected-6681" to be "success or failure" May 13 21:48:21.337: INFO: Pod "pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162": Phase="Pending", Reason="", readiness=false. Elapsed: 21.034778ms May 13 21:48:23.342: INFO: Pod "pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025276254s May 13 21:48:25.347: INFO: Pod "pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030545345s STEP: Saw pod success May 13 21:48:25.347: INFO: Pod "pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162" satisfied condition "success or failure" May 13 21:48:25.350: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162 container projected-configmap-volume-test: STEP: delete the pod May 13 21:48:25.451: INFO: Waiting for pod pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162 to disappear May 13 21:48:25.607: INFO: Pod pod-projected-configmaps-a934d99f-95da-42c4-a6e8-2dd559833162 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:48:25.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6681" for this suite. 
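------------------------------
Projected volumes gather several sources (configMap, secret, downwardAPI, serviceAccountToken) under one mount, and DefaultMode applies to every projected file unless a per-item mode overrides it, which is what the defaultMode case above asserts on. A minimal sketch of such a volume; the ConfigMap name is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // octal 0400 serializes as decimal 256 in the JSON
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode, // applies to every projected file unless an item overrides it
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume", // illustrative
						},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
------------------------------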
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2212,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:48:25.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:48:26.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:48:28.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:48:30.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:48:33.622: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent 
deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:48:33.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-774" for this suite. STEP: Destroying namespace "webhook-774-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.388 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":124,"skipped":2221,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:48:34.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 21:48:34.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b" in namespace "downward-api-7535" to be "success or failure" May 13 21:48:34.076: INFO: Pod "downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082359ms May 13 21:48:36.264: INFO: Pod "downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.191995839s May 13 21:48:38.270: INFO: Pod "downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197778057s STEP: Saw pod success May 13 21:48:38.270: INFO: Pod "downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b" satisfied condition "success or failure" May 13 21:48:38.272: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b container client-container: STEP: delete the pod May 13 21:48:38.349: INFO: Waiting for pod downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b to disappear May 13 21:48:38.353: INFO: Pod downwardapi-volume-f198dd2d-e31a-4131-9f42-1258dd4b071b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:48:38.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7535" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2226,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:48:38.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-fe25b726-38a0-4e89-9800-eefaa7665701 STEP: Creating a pod to test consume secrets May 13 21:48:38.764: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95" in namespace "projected-7610" to be "success or failure" May 13 21:48:38.803: INFO: Pod "pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95": Phase="Pending", Reason="", readiness=false. Elapsed: 38.88507ms May 13 21:48:40.807: INFO: Pod "pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043471916s May 13 21:48:42.812: INFO: Pod "pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04796532s STEP: Saw pod success May 13 21:48:42.812: INFO: Pod "pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95" satisfied condition "success or failure" May 13 21:48:42.815: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95 container projected-secret-volume-test: STEP: delete the pod May 13 21:48:42.834: INFO: Waiting for pod pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95 to disappear May 13 21:48:42.838: INFO: Pod pod-projected-secrets-d3dd4523-f160-4706-9bd8-fda28b189b95 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:48:42.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7610" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:48:42.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-696db33b-579a-4c1e-81ac-627714bc7958 STEP: Creating a pod to test consume secrets May 13 21:48:42.964: INFO: Waiting up to 5m0s for pod "pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9" in namespace "secrets-9879" to be "success or failure" May 13 21:48:42.982: INFO: Pod "pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.617558ms May 13 21:48:44.986: INFO: Pod "pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02192496s May 13 21:48:46.990: INFO: Pod "pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025967128s STEP: Saw pod success May 13 21:48:46.990: INFO: Pod "pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9" satisfied condition "success or failure" May 13 21:48:46.993: INFO: Trying to get logs from node jerma-worker pod pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9 container secret-volume-test: STEP: delete the pod May 13 21:48:47.018: INFO: Waiting for pod pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9 to disappear May 13 21:48:47.029: INFO: Pod pod-secrets-020cbf65-7e5c-4f91-9234-53a66004eba9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:48:47.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9879" for this suite. 
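------------------------------
The secret-volume spec this test exercises can be reproduced by hand. A minimal sketch, assuming hypothetical object names and images (the secret name, key, mapped path, and busybox image below are stand-ins for whatever this run generated):

kubectl create secret generic secret-test-map-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29            # assumption; the suite uses its own mounttest image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:                        # the "mappings" under test
      - key: data-1
        path: new-path-data-1      # key is remapped to this path
        mode: 0400                 # the per-item "Item Mode" under test
EOF

Once the pod succeeds, `kubectl logs pod-secrets-example` should show the file at the mapped path with mode -r-------- and its decoded contents.
------------------------------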
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:48:47.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8196 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 21:48:47.156: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 13 21:49:09.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.111:8080/dial?request=hostname&protocol=http&host=10.244.1.110&port=8080&tries=1'] Namespace:pod-network-test-8196 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:49:09.307: INFO: >>> kubeConfig: /root/.kube/config I0513 21:49:09.339218 6 log.go:172] (0xc0027574a0) (0xc001e2a640) Create stream I0513 21:49:09.339257 6 log.go:172] (0xc0027574a0) (0xc001e2a640) Stream added, broadcasting: 1 I0513 21:49:09.341648 6 log.go:172] (0xc0027574a0) Reply frame received for 1 I0513 21:49:09.341690 6 log.go:172] (0xc0027574a0) (0xc00227b5e0) Create stream I0513 21:49:09.341705 6 log.go:172] (0xc0027574a0) (0xc00227b5e0) Stream added, broadcasting: 3 I0513 21:49:09.342785 6 log.go:172] (0xc0027574a0) Reply frame received for 3 I0513 21:49:09.342827 6 log.go:172] (0xc0027574a0) (0xc002692000) Create stream I0513 21:49:09.342842 6 log.go:172] (0xc0027574a0) (0xc002692000) Stream added, broadcasting: 5 I0513 21:49:09.343758 6 log.go:172] (0xc0027574a0) Reply frame received for 5 I0513 21:49:09.429352 6 log.go:172] (0xc0027574a0) Data frame received for 3 I0513 21:49:09.429377 6 log.go:172] (0xc00227b5e0) (3) Data frame handling I0513 21:49:09.429392 6 log.go:172] (0xc00227b5e0) (3) Data frame sent I0513 21:49:09.430076 6 log.go:172] (0xc0027574a0) Data frame received for 5 I0513 21:49:09.430131 6 log.go:172] (0xc002692000) (5) Data frame handling I0513 21:49:09.430165 6 log.go:172] (0xc0027574a0) Data frame received for 3 I0513 21:49:09.430184 6 log.go:172] (0xc00227b5e0) (3) Data frame handling I0513 21:49:09.431876 6 log.go:172] (0xc0027574a0) Data frame received for 1 I0513 21:49:09.431903 6 log.go:172] (0xc001e2a640) (1) Data frame handling I0513 21:49:09.431927 6 log.go:172] (0xc001e2a640) (1) Data frame sent I0513 21:49:09.432176 6 log.go:172] (0xc0027574a0) (0xc001e2a640) Stream removed, broadcasting: 1 I0513 21:49:09.432352 6 log.go:172] (0xc0027574a0) (0xc001e2a640) Stream removed, broadcasting: 1 I0513 21:49:09.432407 6 log.go:172] (0xc0027574a0) (0xc00227b5e0) Stream removed, broadcasting: 3 I0513 
21:49:09.432461 6 log.go:172] (0xc0027574a0) (0xc002692000) Stream removed, broadcasting: 5 May 13 21:49:09.432: INFO: Waiting for responses: map[] I0513 21:49:09.432557 6 log.go:172] (0xc0027574a0) Go away received May 13 21:49:09.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.111:8080/dial?request=hostname&protocol=http&host=10.244.2.221&port=8080&tries=1'] Namespace:pod-network-test-8196 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:49:09.436: INFO: >>> kubeConfig: /root/.kube/config I0513 21:49:09.477706 6 log.go:172] (0xc0025342c0) (0xc002234500) Create stream I0513 21:49:09.477739 6 log.go:172] (0xc0025342c0) (0xc002234500) Stream added, broadcasting: 1 I0513 21:49:09.479203 6 log.go:172] (0xc0025342c0) Reply frame received for 1 I0513 21:49:09.479244 6 log.go:172] (0xc0025342c0) (0xc001e2a780) Create stream I0513 21:49:09.479255 6 log.go:172] (0xc0025342c0) (0xc001e2a780) Stream added, broadcasting: 3 I0513 21:49:09.479871 6 log.go:172] (0xc0025342c0) Reply frame received for 3 I0513 21:49:09.479903 6 log.go:172] (0xc0025342c0) (0xc001e2a820) Create stream I0513 21:49:09.479912 6 log.go:172] (0xc0025342c0) (0xc001e2a820) Stream added, broadcasting: 5 I0513 21:49:09.481236 6 log.go:172] (0xc0025342c0) Reply frame received for 5 I0513 21:49:09.546219 6 log.go:172] (0xc0025342c0) Data frame received for 3 I0513 21:49:09.546247 6 log.go:172] (0xc001e2a780) (3) Data frame handling I0513 21:49:09.546263 6 log.go:172] (0xc001e2a780) (3) Data frame sent I0513 21:49:09.547139 6 log.go:172] (0xc0025342c0) Data frame received for 3 I0513 21:49:09.547153 6 log.go:172] (0xc001e2a780) (3) Data frame handling I0513 21:49:09.547194 6 log.go:172] (0xc0025342c0) Data frame received for 5 I0513 21:49:09.547224 6 log.go:172] (0xc001e2a820) (5) Data frame handling I0513 21:49:09.549465 6 log.go:172] (0xc0025342c0) Data frame received for 1 I0513 21:49:09.549518 6 log.go:172] (0xc002234500) (1) Data frame handling I0513 21:49:09.549547 6 log.go:172] (0xc002234500) (1) Data frame sent I0513 21:49:09.549573 6 log.go:172] (0xc0025342c0) (0xc002234500) Stream removed, broadcasting: 1 I0513 21:49:09.549689 6 log.go:172] (0xc0025342c0) (0xc002234500) Stream removed, broadcasting: 1 I0513 21:49:09.549739 6 log.go:172] (0xc0025342c0) (0xc001e2a780) Stream removed, broadcasting: 3 I0513 21:49:09.549905 6 log.go:172] (0xc0025342c0) Go away received I0513 21:49:09.549953 6 log.go:172] (0xc0025342c0) (0xc001e2a820) Stream removed, broadcasting: 5 May 13 21:49:09.550: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:49:09.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8196" for this suite. 
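------------------------------
The connectivity check above can be replayed by hand while the test pods are still up: agnhost's /dial endpoint on the host-network test pod proxies an HTTP hostname request to a target pod. A minimal sketch, lifted from the ExecWithOptions record in this run and assuming the printed pod IPs (10.244.1.111 as the prober, 10.244.1.110 as one target) are still current:

kubectl exec -n pod-network-test-8196 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.111:8080/dial?request=hostname&protocol=http&host=10.244.1.110&port=8080&tries=1'"

A successful dial returns a JSON body listing the responding hostname(s); the spec passes once every netserver pod has answered.
------------------------------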
• [SLOW TEST:22.524 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:49:09.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:49:09.679: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-83c6252a-db01-470d-9a78-07bb3c67ae72" in namespace "security-context-test-4630" to be "success or failure" May 13 21:49:09.683: INFO: Pod "alpine-nnp-false-83c6252a-db01-470d-9a78-07bb3c67ae72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.288833ms May 13 21:49:11.687: INFO: Pod "alpine-nnp-false-83c6252a-db01-470d-9a78-07bb3c67ae72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007483558s May 13 21:49:13.690: INFO: Pod "alpine-nnp-false-83c6252a-db01-470d-9a78-07bb3c67ae72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011123283s May 13 21:49:13.691: INFO: Pod "alpine-nnp-false-83c6252a-db01-470d-9a78-07bb3c67ae72" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:49:13.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4630" for this suite. 
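------------------------------
What this spec asserts can be sketched with a plain pod: setting allowPrivilegeEscalation: false makes the kubelet start the container with the no_new_privs flag, so a process cannot gain privileges through setuid binaries. A minimal sketch, assuming a hypothetical pod name and that reading NoNewPrivs from /proc is an acceptable stand-in for the suite's own setuid check:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: alpine:3.12
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false
EOF

kubectl logs alpine-nnp-false-example   # expected: NoNewPrivs: 1
------------------------------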
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2340,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:49:13.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:49:14.389: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:49:17.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:49:19.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:49:22.221: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults 
after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:49:22.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7103" for this suite. STEP: Destroying namespace "webhook-7103-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.891 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":130,"skipped":2348,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:49:22.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 13 21:49:22.707: INFO: Pod name pod-release: Found 0 pods out of 1 May 13 21:49:27.763: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:49:27.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5001" for this suite. 
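------------------------------
The release behaviour can be reproduced by overwriting the matched label on one of the controller's pods: once the labels no longer satisfy the selector, the ReplicationController orphans the pod (drops its controller ownerReference) and spawns a replacement. A minimal sketch, assuming hypothetical names and the pause image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-example        # hypothetical name
spec:
  replicas: 1
  selector:
    name: pod-release-example
  template:
    metadata:
      labels:
        name: pod-release-example
    spec:
      containers:
      - name: c
        image: k8s.gcr.io/pause:3.1
EOF

POD=$(kubectl get pods -l name=pod-release-example -o name | head -n1)
kubectl label "$POD" name=released --overwrite
kubectl get "$POD" -o jsonpath='{.metadata.ownerReferences}'   # expected: empty, i.e. released
------------------------------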
• [SLOW TEST:5.543 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":131,"skipped":2352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:49:28.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3975.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3975.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3975.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3975.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:49:36.499: INFO: DNS probes using dns-3975/dns-test-e7d30efa-1325-41cd-8d0c-a6a3d3ef6bc4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:49:36.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3975" for this suite. 
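------------------------------
The records this spec probes come from the pod's hostname and subdomain fields paired with a headless service: a pod with hostname H and subdomain S in namespace N resolves as H.S.N.svc.cluster.local, which is what the wheezy/jessie probe loops above query with getent. A minimal sketch, assuming the default namespace and hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sub-example                # hypothetical name
spec:
  clusterIP: None                  # headless, as in the test
  selector:
    name: querier-example
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: querier-example            # hypothetical name
  labels:
    name: querier-example
spec:
  hostname: querier-example
  subdomain: sub-example
  containers:
  - name: q
    image: busybox:1.31
    command: ["sleep", "3600"]
EOF

kubectl exec querier-example -- nslookup querier-example.sub-example.default.svc.cluster.local
------------------------------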
• [SLOW TEST:8.484 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":132,"skipped":2376,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:49:36.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:49:36.992: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 13 21:49:40.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 create -f -' May 13 21:49:43.661: INFO: stderr: "" May 13 21:49:43.661: INFO: stdout: "e2e-test-crd-publish-openapi-9410-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 13 21:49:43.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 delete e2e-test-crd-publish-openapi-9410-crds test-foo' May 13 21:49:43.770: INFO: stderr: "" May 13 21:49:43.770: INFO: stdout: "e2e-test-crd-publish-openapi-9410-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 13 21:49:43.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 apply -f -' May 13 21:49:44.019: INFO: stderr: "" May 13 21:49:44.019: INFO: stdout: "e2e-test-crd-publish-openapi-9410-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 13 21:49:44.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 delete e2e-test-crd-publish-openapi-9410-crds test-foo' May 13 21:49:44.130: INFO: stderr: "" May 13 21:49:44.131: INFO: stdout: "e2e-test-crd-publish-openapi-9410-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 13 21:49:44.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 create -f -' May 13 21:49:44.375: INFO: rc: 1 May 13 21:49:44.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 apply -f -' May 13 21:49:44.627: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 13 21:49:44.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 create 
-f -' May 13 21:49:44.844: INFO: rc: 1 May 13 21:49:44.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 apply -f -' May 13 21:49:45.132: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 13 21:49:45.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9410-crds' May 13 21:49:45.404: INFO: stderr: "" May 13 21:49:45.404: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9410-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 13 21:49:45.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9410-crds.metadata' May 13 21:49:45.669: INFO: stderr: "" May 13 21:49:45.669: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9410-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. 
This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 13 21:49:45.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9410-crds.spec' May 13 21:49:45.974: INFO: stderr: "" May 13 21:49:45.974: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9410-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 13 21:49:45.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9410-crds.spec.bars' May 13 21:49:46.307: INFO: stderr: "" May 13 21:49:46.307: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9410-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 13 21:49:46.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9410-crds.spec.bars2' May 13 21:49:46.590: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:49:49.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8509" for this suite. • [SLOW TEST:12.869 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":133,"skipped":2377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:49:49.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5309 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 21:49:49.632: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 13 21:50:17.813: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.114 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5309 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:50:17.814: INFO: >>> kubeConfig: /root/.kube/config I0513 21:50:17.849881 6 log.go:172] (0xc00147a420) (0xc002235540) Create stream I0513 21:50:17.849919 6 log.go:172] (0xc00147a420) (0xc002235540) Stream added, broadcasting: 1 I0513 21:50:17.851912 6 log.go:172] (0xc00147a420) Reply frame received for 1 I0513 21:50:17.851976 6 log.go:172] (0xc00147a420) (0xc0019c2780) Create stream I0513 21:50:17.851993 6 log.go:172] (0xc00147a420) (0xc0019c2780) Stream added, broadcasting: 3 I0513 21:50:17.853338 6 log.go:172] (0xc00147a420) Reply frame received for 3 I0513 21:50:17.853368 6 log.go:172] (0xc00147a420) (0xc001e2bae0) Create stream I0513 21:50:17.853381 6 log.go:172] (0xc00147a420) (0xc001e2bae0) Stream added, broadcasting: 5 I0513 21:50:17.855181 6 log.go:172] (0xc00147a420) Reply frame received for 5 I0513 21:50:18.925527 6 log.go:172] (0xc00147a420) Data frame received for 3 I0513 21:50:18.925554 6 log.go:172] (0xc0019c2780) (3) Data frame handling I0513 21:50:18.925565 6 log.go:172] (0xc0019c2780) (3) Data frame sent I0513 21:50:18.925575 6 log.go:172] (0xc00147a420) Data frame received for 5 I0513 21:50:18.925584 6 log.go:172] (0xc001e2bae0) (5) Data frame handling I0513 21:50:18.925619 6 log.go:172] (0xc00147a420) Data frame received for 3 I0513 21:50:18.925630 6 log.go:172] (0xc0019c2780) (3) Data frame handling I0513 21:50:18.927101 6 log.go:172] (0xc00147a420) Data frame received for 1 I0513 21:50:18.927118 6 log.go:172] (0xc002235540) (1) Data frame handling I0513 21:50:18.927131 6 log.go:172] (0xc002235540) (1) Data frame sent I0513 21:50:18.927145 6 log.go:172] (0xc00147a420) (0xc002235540) Stream removed, broadcasting: 1 I0513 21:50:18.927156 6 log.go:172] (0xc00147a420) Go away received I0513 21:50:18.927227 6 log.go:172] (0xc00147a420) (0xc002235540) Stream removed, broadcasting: 1 I0513 21:50:18.927256 6 log.go:172] (0xc00147a420) (0xc0019c2780) Stream removed, broadcasting: 3 I0513 21:50:18.927272 6 log.go:172] (0xc00147a420) (0xc001e2bae0) Stream removed, broadcasting: 5 May 13 21:50:18.927: INFO: Found all expected endpoints: [netserver-0] May 13 21:50:18.930: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.226 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5309 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:50:18.930: INFO: >>> kubeConfig: /root/.kube/config I0513 21:50:18.956027 6 log.go:172] (0xc001a18370) (0xc0019c3040) Create stream I0513 21:50:18.956065 6 log.go:172] (0xc001a18370) (0xc0019c3040) Stream added, broadcasting: 1 I0513 21:50:18.957854 6 log.go:172] (0xc001a18370) Reply frame received for 1 I0513 21:50:18.957888 6 log.go:172] (0xc001a18370) (0xc00215ae60) Create stream I0513 21:50:18.957907 6 log.go:172] (0xc001a18370) (0xc00215ae60) Stream added, broadcasting: 3 I0513 21:50:18.958821 6 log.go:172] (0xc001a18370) Reply frame received for 3 I0513 21:50:18.958846 6 log.go:172] (0xc001a18370) (0xc0019c3180) Create stream I0513 21:50:18.958861 6 log.go:172] (0xc001a18370) (0xc0019c3180) Stream added, broadcasting: 5 I0513 21:50:18.959514 6 log.go:172] (0xc001a18370) Reply frame received for 5 I0513 21:50:20.041463 6 log.go:172] (0xc001a18370) Data frame received for 3 I0513 21:50:20.041524 6 log.go:172] (0xc00215ae60) (3) Data frame handling I0513 21:50:20.041595 6 log.go:172] (0xc00215ae60) (3) Data 
frame sent I0513 21:50:20.041672 6 log.go:172] (0xc001a18370) Data frame received for 5 I0513 21:50:20.041751 6 log.go:172] (0xc0019c3180) (5) Data frame handling I0513 21:50:20.042149 6 log.go:172] (0xc001a18370) Data frame received for 3 I0513 21:50:20.042182 6 log.go:172] (0xc00215ae60) (3) Data frame handling I0513 21:50:20.042991 6 log.go:172] (0xc001a18370) Data frame received for 1 I0513 21:50:20.043022 6 log.go:172] (0xc0019c3040) (1) Data frame handling I0513 21:50:20.043041 6 log.go:172] (0xc0019c3040) (1) Data frame sent I0513 21:50:20.043122 6 log.go:172] (0xc001a18370) (0xc0019c3040) Stream removed, broadcasting: 1 I0513 21:50:20.043203 6 log.go:172] (0xc001a18370) Go away received I0513 21:50:20.043288 6 log.go:172] (0xc001a18370) (0xc0019c3040) Stream removed, broadcasting: 1 I0513 21:50:20.043386 6 log.go:172] (0xc001a18370) (0xc00215ae60) Stream removed, broadcasting: 3 I0513 21:50:20.043459 6 log.go:172] (0xc001a18370) (0xc0019c3180) Stream removed, broadcasting: 5 May 13 21:50:20.043: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:50:20.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5309" for this suite. • [SLOW TEST:30.557 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2412,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:50:20.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:50:26.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9175" for this suite. 
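------------------------------
The read-only check boils down to readOnlyRootFilesystem: true in the container's securityContext: a write to the root filesystem must fail, so nothing the command tries to write ever lands on disk. A minimal sketch, assuming a hypothetical pod name; the shell should report "Read-only file system" and /file should never be created:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs busybox-readonly-example   # the failed write surfaces here; no "test" output appears
------------------------------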
• [SLOW TEST:6.308 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2428,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:50:26.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9595 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9595 STEP: creating replication controller externalsvc in namespace services-9595 I0513 21:50:27.202658 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9595, replica count: 2 I0513 21:50:30.253371 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:50:33.253602 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 13 21:50:33.295: INFO: Creating new exec pod May 13 21:50:37.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9595 execpodnck45 -- /bin/sh -x -c nslookup clusterip-service' May 13 21:50:37.567: INFO: stderr: "I0513 21:50:37.454844 3161 log.go:172] (0xc000afa0b0) (0xc000aee0a0) Create stream\nI0513 21:50:37.454906 3161 log.go:172] (0xc000afa0b0) (0xc000aee0a0) Stream added, broadcasting: 1\nI0513 21:50:37.457986 3161 log.go:172] (0xc000afa0b0) Reply frame received for 1\nI0513 21:50:37.458034 3161 log.go:172] (0xc000afa0b0) (0xc00092e000) Create stream\nI0513 21:50:37.458053 3161 log.go:172] (0xc000afa0b0) (0xc00092e000) Stream added, broadcasting: 3\nI0513 21:50:37.458955 3161 log.go:172] (0xc000afa0b0) Reply frame received for 3\nI0513 21:50:37.458984 3161 log.go:172] (0xc000afa0b0) (0xc000b02320) Create stream\nI0513 21:50:37.458995 3161 log.go:172] (0xc000afa0b0) (0xc000b02320) Stream added, broadcasting: 5\nI0513 21:50:37.459803 3161 log.go:172] (0xc000afa0b0) Reply frame 
received for 5\nI0513 21:50:37.551777 3161 log.go:172] (0xc000afa0b0) Data frame received for 5\nI0513 21:50:37.551803 3161 log.go:172] (0xc000b02320) (5) Data frame handling\nI0513 21:50:37.551824 3161 log.go:172] (0xc000b02320) (5) Data frame sent\n+ nslookup clusterip-service\nI0513 21:50:37.558829 3161 log.go:172] (0xc000afa0b0) Data frame received for 3\nI0513 21:50:37.558849 3161 log.go:172] (0xc00092e000) (3) Data frame handling\nI0513 21:50:37.558863 3161 log.go:172] (0xc00092e000) (3) Data frame sent\nI0513 21:50:37.559778 3161 log.go:172] (0xc000afa0b0) Data frame received for 3\nI0513 21:50:37.559791 3161 log.go:172] (0xc00092e000) (3) Data frame handling\nI0513 21:50:37.559802 3161 log.go:172] (0xc00092e000) (3) Data frame sent\nI0513 21:50:37.560268 3161 log.go:172] (0xc000afa0b0) Data frame received for 3\nI0513 21:50:37.560277 3161 log.go:172] (0xc00092e000) (3) Data frame handling\nI0513 21:50:37.560353 3161 log.go:172] (0xc000afa0b0) Data frame received for 5\nI0513 21:50:37.560367 3161 log.go:172] (0xc000b02320) (5) Data frame handling\nI0513 21:50:37.562103 3161 log.go:172] (0xc000afa0b0) Data frame received for 1\nI0513 21:50:37.562132 3161 log.go:172] (0xc000aee0a0) (1) Data frame handling\nI0513 21:50:37.562153 3161 log.go:172] (0xc000aee0a0) (1) Data frame sent\nI0513 21:50:37.562177 3161 log.go:172] (0xc000afa0b0) (0xc000aee0a0) Stream removed, broadcasting: 1\nI0513 21:50:37.562226 3161 log.go:172] (0xc000afa0b0) Go away received\nI0513 21:50:37.562578 3161 log.go:172] (0xc000afa0b0) (0xc000aee0a0) Stream removed, broadcasting: 1\nI0513 21:50:37.562601 3161 log.go:172] (0xc000afa0b0) (0xc00092e000) Stream removed, broadcasting: 3\nI0513 21:50:37.562616 3161 log.go:172] (0xc000afa0b0) (0xc000b02320) Stream removed, broadcasting: 5\n" May 13 21:50:37.567: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9595.svc.cluster.local\tcanonical name = externalsvc.services-9595.svc.cluster.local.\nName:\texternalsvc.services-9595.svc.cluster.local\nAddress: 10.99.37.122\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9595, will wait for the garbage collector to delete the pods May 13 21:50:37.628: INFO: Deleting ReplicationController externalsvc took: 7.763816ms May 13 21:50:37.928: INFO: Terminating ReplicationController externalsvc pods took: 300.242627ms May 13 21:50:49.598: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:50:49.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9595" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.282 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":136,"skipped":2437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:50:49.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:51:05.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4997" for this suite. • [SLOW TEST:16.309 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":137,"skipped":2460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:51:05.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:51:06.375: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:51:08.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003466, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003466, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003466, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003466, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:51:11.482: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:51:11.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3687-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:51:12.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9553" for this suite. STEP: Destroying namespace "webhook-9553-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.914 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":138,"skipped":2486,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:51:12.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 13 21:51:17.537: INFO: Successfully updated pod "annotationupdate1707397c-7810-4b4b-9e7b-f85011f2481f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:51:21.560: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5941" for this suite. • [SLOW TEST:8.703 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2492,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:51:21.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 13 21:51:21.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:21.703: INFO: Number of nodes with available pods: 0 May 13 21:51:21.703: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:22.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:22.709: INFO: Number of nodes with available pods: 0 May 13 21:51:22.709: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:23.707: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:23.710: INFO: Number of nodes with available pods: 0 May 13 21:51:23.710: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:24.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:24.992: INFO: Number of nodes with available pods: 0 May 13 21:51:24.992: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:25.771: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:25.774: INFO: Number of nodes with available pods: 0 May 13 21:51:25.774: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:26.717: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 
21:51:26.720: INFO: Number of nodes with available pods: 1 May 13 21:51:26.720: INFO: Node jerma-worker2 is running more than one daemon pod May 13 21:51:27.708: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:27.711: INFO: Number of nodes with available pods: 2 May 13 21:51:27.711: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 13 21:51:27.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:27.798: INFO: Number of nodes with available pods: 1 May 13 21:51:27.798: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:28.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:28.807: INFO: Number of nodes with available pods: 1 May 13 21:51:28.807: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:29.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:29.807: INFO: Number of nodes with available pods: 1 May 13 21:51:29.807: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:30.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:30.807: INFO: Number of nodes with available pods: 1 May 13 21:51:30.807: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:31.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:31.806: INFO: Number of nodes with available pods: 1 May 13 21:51:31.806: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:32.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:32.806: INFO: Number of nodes with available pods: 1 May 13 21:51:32.806: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:33.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:33.805: INFO: Number of nodes with available pods: 1 May 13 21:51:33.805: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:34.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:34.805: INFO: Number of nodes with available pods: 1 May 13 21:51:34.805: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:35.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:35.804: INFO: Number of nodes with available pods: 1 May 13 
21:51:35.804: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:36.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:36.806: INFO: Number of nodes with available pods: 1 May 13 21:51:36.806: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:37.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:37.805: INFO: Number of nodes with available pods: 1 May 13 21:51:37.805: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:38.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:38.806: INFO: Number of nodes with available pods: 1 May 13 21:51:38.806: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:39.824: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:39.828: INFO: Number of nodes with available pods: 1 May 13 21:51:39.828: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:40.921: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:40.925: INFO: Number of nodes with available pods: 1 May 13 21:51:40.925: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:41.812: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:41.815: INFO: Number of nodes with available pods: 1 May 13 21:51:41.815: INFO: Node jerma-worker is running more than one daemon pod May 13 21:51:42.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 21:51:42.806: INFO: Number of nodes with available pods: 2 May 13 21:51:42.806: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9647, will wait for the garbage collector to delete the pods May 13 21:51:42.868: INFO: Deleting DaemonSet.extensions daemon-set took: 7.235664ms May 13 21:51:43.168: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.245841ms May 13 21:51:49.272: INFO: Number of nodes with available pods: 0 May 13 21:51:49.272: INFO: Number of running nodes: 0, number of available pods: 0 May 13 21:51:49.274: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9647/daemonsets","resourceVersion":"15950941"},"items":null} May 13 21:51:49.276: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9647/pods","resourceVersion":"15950941"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:51:49.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9647" for this suite. • [SLOW TEST:27.721 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":140,"skipped":2495,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:51:49.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 13 21:51:53.420: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 13 21:51:58.529: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:51:58.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5071" for this suite. 
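The Delete Grace Period test above submits a pod, deletes it gracefully, and confirms the kubelet observed the termination notice before the pod disappeared. As a rough sketch of the moving parts (this is not the suite's actual manifest; the pod name is hypothetical), a pod declares how long its containers get to shut down cleanly via terminationGracePeriodSeconds:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                  # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # seconds allowed for clean shutdown
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

A delete request can override the spec value for a single deletion, e.g. kubectl delete pod graceful-pod --grace-period=5.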
• [SLOW TEST:9.252 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":141,"skipped":2507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:51:58.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:52:02.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9958" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2533,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:52:02.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cfb52709-af08-47f3-92a0-4a8813af3b0e STEP: Creating a pod to test consume configMaps May 13 21:52:02.953: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0" in namespace "projected-2083" to be "success or failure" May 13 21:52:02.994: INFO: Pod "pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0": Phase="Pending", Reason="", readiness=false. Elapsed: 41.452764ms May 13 21:52:04.998: INFO: Pod "pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04453577s May 13 21:52:07.001: INFO: Pod "pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04782136s STEP: Saw pod success May 13 21:52:07.001: INFO: Pod "pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0" satisfied condition "success or failure" May 13 21:52:07.003: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0 container projected-configmap-volume-test: STEP: delete the pod May 13 21:52:07.110: INFO: Waiting for pod pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0 to disappear May 13 21:52:07.127: INFO: Pod pod-projected-configmaps-b6cc2a49-267b-4d04-88e7-38f845e457e0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:52:07.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2083" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2535,"failed":0} ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:52:07.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-34695e0a-0010-4567-8d34-edcbe68e70e8 STEP: Creating secret with name s-test-opt-upd-b0e9d844-eec2-412c-a230-004236ca147c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-34695e0a-0010-4567-8d34-edcbe68e70e8 STEP: Updating secret s-test-opt-upd-b0e9d844-eec2-412c-a230-004236ca147c STEP: Creating secret with name s-test-opt-create-97682ded-b061-4a0c-ae71-a7438438548b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:53:39.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-657" for this suite. 
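The Secrets test just logged mounts secrets as optional volume sources, then deletes one, updates another, and creates a third, waiting for the kubelet to sync each change into the running pod's filesystem. A minimal sketch of an optional secret volume, with hypothetical pod and secret names (not the suite's generated ones) and an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: secret-watcher                       # hypothetical name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29    # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-create-example  # hypothetical; need not exist at pod start
      optional: true                         # pod starts even if the secret is absent

Because the volume is optional, the pod schedules and runs before the secret exists; once the secret is created or updated, the kubelet eventually projects the new keys into the mount, which is exactly what the "waiting to observe update in volume" step polls for.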
• [SLOW TEST:92.710 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2535,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:53:39.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 21:53:40.001: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 13 21:53:45.006: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 13 21:53:45.006: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 13 21:53:45.107: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1388 /apis/apps/v1/namespaces/deployment-1388/deployments/test-cleanup-deployment 7bd14064-fc20-4099-be21-88c5a9fe6ca9 15951430 1 2020-05-13 21:53:45 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040a8468 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 13 21:53:45.112: INFO: New ReplicaSet 
"test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-1388 /apis/apps/v1/namespaces/deployment-1388/replicasets/test-cleanup-deployment-55ffc6b7b6 cd158e55-5b83-40ba-b724-f0728f638ea2 15951433 1 2020-05-13 21:53:45 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7bd14064-fc20-4099-be21-88c5a9fe6ca9 0xc003d967b7 0xc003d967b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d96828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 21:53:45.112: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 13 21:53:45.112: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1388 /apis/apps/v1/namespaces/deployment-1388/replicasets/test-cleanup-controller bd2a979e-381d-475c-afbc-1f001100e39b 15951432 1 2020-05-13 21:53:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 7bd14064-fc20-4099-be21-88c5a9fe6ca9 0xc003d966e7 0xc003d966e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003d96748 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 13 21:53:45.163: INFO: Pod "test-cleanup-controller-ps62g" is available: &Pod{ObjectMeta:{test-cleanup-controller-ps62g test-cleanup-controller- deployment-1388 /api/v1/namespaces/deployment-1388/pods/test-cleanup-controller-ps62g d0c225b7-112d-45e9-b074-8111867f12a6 15951412 0 2020-05-13 21:53:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller bd2a979e-381d-475c-afbc-1f001100e39b 0xc003d96c57 0xc003d96c58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6qrvb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6qrvb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6qrvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:53:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:53:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:53:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:53:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.123,StartTime:2020-05-13 21:53:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 21:53:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e7ebf7d594e872d7aef29983f708180f89726afe94e84d559d08da10bce28bb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 21:53:45.164: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-f7gz4" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-f7gz4 test-cleanup-deployment-55ffc6b7b6- deployment-1388 /api/v1/namespaces/deployment-1388/pods/test-cleanup-deployment-55ffc6b7b6-f7gz4 efa5d63d-b238-49ad-b55d-6728fdeb4d8c 15951439 0 2020-05-13 21:53:45 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 cd158e55-5b83-40ba-b724-f0728f638ea2 0xc003d96de7 0xc003d96de8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6qrvb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6qrvb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6qrvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNames
pace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 21:53:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:53:45.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1388" for this suite. • [SLOW TEST:5.389 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":145,"skipped":2554,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:53:45.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 13 21:53:45.329: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:53:59.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9871" for this suite. 
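The Pods test just logged drives everything through a watch: it registers a watch on a label selector, submits the pod, and asserts that the creation, the termination notice, and the final deletion all arrive as watch events. The suite does this through the Go client; a rough CLI-flavored equivalent, with a hypothetical pod name and selector label, looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: watched-pod        # hypothetical name
  labels:
    time: "t0"             # hypothetical label used as the watch selector
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

With the pod labeled, kubectl get pods -l time=t0 --watch streams the same add/modify/delete events the test asserts on, and kubectl delete pod watched-pod triggers the graceful-termination sequence it verifies.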
• [SLOW TEST:14.025 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2558,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:53:59.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-41aa96e8-e03b-442c-bc21-3facd3373758 STEP: Creating a pod to test consume configMaps May 13 21:53:59.440: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50" in namespace "projected-3943" to be "success or failure" May 13 21:53:59.451: INFO: Pod "pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50": Phase="Pending", Reason="", readiness=false. Elapsed: 11.078313ms May 13 21:54:01.455: INFO: Pod "pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01520634s May 13 21:54:03.516: INFO: Pod "pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50": Phase="Running", Reason="", readiness=true. Elapsed: 4.075738163s May 13 21:54:05.519: INFO: Pod "pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078810095s STEP: Saw pod success May 13 21:54:05.519: INFO: Pod "pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50" satisfied condition "success or failure" May 13 21:54:05.521: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50 container projected-configmap-volume-test: STEP: delete the pod May 13 21:54:05.558: INFO: Waiting for pod pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50 to disappear May 13 21:54:05.567: INFO: Pod pod-projected-configmaps-b3617d69-7c5a-453d-ba26-e7e34656cd50 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:54:05.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3943" for this suite. 
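The Projected configMap test above is the non-root variant: the same configMap-backed projected volume, but read by a container running with a non-zero UID, demonstrating that the default file modes leave the keys readable to unprivileged users. A minimal sketch, assuming a hypothetical configMap name and a busybox reader (the suite uses its own generated names and test image):

apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot                    # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                          # any non-root UID
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29    # assumed image
    command: ["sh", "-c", "cat /etc/projected/key-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config                    # hypothetical configMap holding key-1

Projected keys default to mode 0644, so the non-root reader works without any explicit defaultMode.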
• [SLOW TEST:6.313 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2564,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:54:05.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-5a25ce96-848e-4531-a876-684078c076fa in namespace container-probe-9174 May 13 21:54:09.668: INFO: Started pod busybox-5a25ce96-848e-4531-a876-684078c076fa in namespace container-probe-9174 STEP: checking the pod's current state and verifying that restartCount is present May 13 21:54:09.671: INFO: Initial restart count of pod busybox-5a25ce96-848e-4531-a876-684078c076fa is 0 May 13 21:55:05.854: INFO: Restart count of pod container-probe-9174/busybox-5a25ce96-848e-4531-a876-684078c076fa is now 1 (56.182909407s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:05.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9174" for this suite. 
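The container-probe test above is the classic exec liveness pattern: the container creates /tmp/health, the kubelet periodically runs cat /tmp/health inside it, and once the file disappears the probe fails and the container is restarted, which is the restartCount 0 -> 1 transition the test waits for. A minimal sketch of such a pod (hypothetical name and timings; not the suite's exact spec):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec                        # hypothetical name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29    # assumed image
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]      # exits 0 only while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5

With the default failureThreshold of 3, the kubelet restarts the container roughly three probe periods after the file is removed, consistent with the ~56s to the first restart observed above (which also covers pod startup).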
• [SLOW TEST:60.354 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2576,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:05.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 13 21:55:11.992: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7000 PodName:pod-sharedvolume-3a5693ff-97c2-49d9-8f35-0eda90d63307 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 21:55:11.992: INFO: >>> kubeConfig: /root/.kube/config I0513 21:55:12.031182 6 log.go:172] (0xc0025711e0) (0xc0026935e0) Create stream I0513 21:55:12.031215 6 log.go:172] (0xc0025711e0) (0xc0026935e0) Stream added, broadcasting: 1 I0513 21:55:12.033371 6 log.go:172] (0xc0025711e0) Reply frame received for 1 I0513 21:55:12.033423 6 log.go:172] (0xc0025711e0) (0xc0015fbcc0) Create stream I0513 21:55:12.033438 6 log.go:172] (0xc0025711e0) (0xc0015fbcc0) Stream added, broadcasting: 3 I0513 21:55:12.034254 6 log.go:172] (0xc0025711e0) Reply frame received for 3 I0513 21:55:12.034288 6 log.go:172] (0xc0025711e0) (0xc001e2a500) Create stream I0513 21:55:12.034301 6 log.go:172] (0xc0025711e0) (0xc001e2a500) Stream added, broadcasting: 5 I0513 21:55:12.035022 6 log.go:172] (0xc0025711e0) Reply frame received for 5 I0513 21:55:12.112410 6 log.go:172] (0xc0025711e0) Data frame received for 5 I0513 21:55:12.112444 6 log.go:172] (0xc001e2a500) (5) Data frame handling I0513 21:55:12.112468 6 log.go:172] (0xc0025711e0) Data frame received for 3 I0513 21:55:12.112479 6 log.go:172] (0xc0015fbcc0) (3) Data frame handling I0513 21:55:12.112498 6 log.go:172] (0xc0015fbcc0) (3) Data frame sent I0513 21:55:12.112509 6 log.go:172] (0xc0025711e0) Data frame received for 3 I0513 21:55:12.112518 6 log.go:172] (0xc0015fbcc0) (3) Data frame handling I0513 21:55:12.114087 6 log.go:172] (0xc0025711e0) Data frame received for 1 I0513 21:55:12.114114 6 log.go:172] (0xc0026935e0) (1) Data frame handling I0513 21:55:12.114138 6 log.go:172] (0xc0026935e0) (1) Data frame sent I0513 21:55:12.114153 6 log.go:172] (0xc0025711e0) (0xc0026935e0) Stream removed, broadcasting: 1 I0513 21:55:12.114174 6 log.go:172] (0xc0025711e0) Go away received I0513 21:55:12.114391 6 log.go:172] 
(0xc0025711e0) (0xc0026935e0) Stream removed, broadcasting: 1 I0513 21:55:12.114431 6 log.go:172] (0xc0025711e0) (0xc0015fbcc0) Stream removed, broadcasting: 3 I0513 21:55:12.114456 6 log.go:172] (0xc0025711e0) (0xc001e2a500) Stream removed, broadcasting: 5 May 13 21:55:12.114: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7000" for this suite. • [SLOW TEST:6.194 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":149,"skipped":2578,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:12.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:55:12.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:55:14.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003712, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003712, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003712, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003712, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:55:17.770: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
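The steps that follow exercise updating and patching a live ValidatingWebhookConfiguration: while the webhook's rules include the CREATE operation on configMaps, a non-compliant configMap is rejected; after the rules are updated to drop CREATE, the same create succeeds; patching CREATE back in restores the rejection. A sketch of the kind of configuration being toggled, pointing at the e2e-test-webhook service deployed above (the configuration, webhook, and path names here are hypothetical, and the caBundle the API server needs to trust the endpoint is elided):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-validating-webhook     # hypothetical name
webhooks:
- name: deny-configmaps.example.com     # hypothetical webhook name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]              # the update/patch steps toggle this list
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-8164
      name: e2e-test-webhook
      path: /always-deny                # hypothetical handler path
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail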
STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8164" for this suite. STEP: Destroying namespace "webhook-8164-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.915 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":150,"skipped":2583,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:18.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 13 21:55:18.098: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 13 21:55:18.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864' May 13 21:55:18.752: INFO: stderr: "" May 13 21:55:18.752: INFO: stdout: "service/agnhost-slave created\n" May 13 21:55:18.752: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 13 21:55:18.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864' May 13 21:55:19.257: INFO: stderr: "" May 13 21:55:19.257: INFO: stdout: "service/agnhost-master created\n" May 13 21:55:19.257: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: 
app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 13 21:55:19.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864' May 13 21:55:20.071: INFO: stderr: "" May 13 21:55:20.071: INFO: stdout: "service/frontend created\n" May 13 21:55:20.071: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 13 21:55:20.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864' May 13 21:55:20.655: INFO: stderr: "" May 13 21:55:20.655: INFO: stdout: "deployment.apps/frontend created\n" May 13 21:55:20.655: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 13 21:55:20.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864' May 13 21:55:21.027: INFO: stderr: "" May 13 21:55:21.027: INFO: stdout: "deployment.apps/agnhost-master created\n" May 13 21:55:21.028: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 13 21:55:21.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864' May 13 21:55:21.494: INFO: stderr: "" May 13 21:55:21.494: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 13 21:55:21.494: INFO: Waiting for all frontend pods to be Running. May 13 21:55:31.545: INFO: Waiting for frontend to serve content. May 13 21:55:31.553: INFO: Trying to add a new entry to the guestbook. May 13 21:55:31.562: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 13 21:55:31.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3864' May 13 21:55:31.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 13 21:55:31.722: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 13 21:55:31.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3864' May 13 21:55:31.889: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:55:31.890: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 13 21:55:31.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3864' May 13 21:55:32.076: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:55:32.076: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 13 21:55:32.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3864' May 13 21:55:32.189: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:55:32.189: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 13 21:55:32.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3864' May 13 21:55:32.296: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:55:32.296: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 13 21:55:32.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3864' May 13 21:55:32.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:55:32.413: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:32.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3864" for this suite. 
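Each guestbook object above is piped to its own kubectl create -f - call and torn down by six matching force-delete invocations. The same objects could live in one multi-document file so a single create or delete handles them together; a minimal sketch covering just the frontend pair, condensed from the manifests logged above (the file name guestbook-frontend.yaml is hypothetical, and resource requests are omitted for brevity):

# guestbook-frontend.yaml -- hypothetical file name; content condensed from the logged manifests
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:              # must match the Deployment's pod template labels below
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:            # these labels are what the Service selector targets
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["guestbook", "--backend-port", "6379"]
        ports:
        - containerPort: 80

kubectl create -f guestbook-frontend.yaml --namespace=kubectl-3864 would emit both "created" lines in one call, and kubectl delete --grace-period=0 --force -f guestbook-frontend.yaml would remove both objects with the same force-delete warning the test logs for each resource.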
• [SLOW TEST:14.386 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":151,"skipped":2597,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:32.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:55:34.671: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:55:36.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003734, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003734, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003735, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003734, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:55:38.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003734, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003734, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003735, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003734, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:55:41.973: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:42.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6366" for this suite. STEP: Destroying namespace "webhook-6366-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.174 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":152,"skipped":2605,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:42.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 13 21:55:42.671: INFO: Waiting up to 5m0s for pod "downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa" in namespace "downward-api-3030" to be "success or failure" May 13 21:55:42.725: INFO: Pod "downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa": Phase="Pending", Reason="", readiness=false. Elapsed: 53.714084ms May 13 21:55:44.855: INFO: Pod "downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183458428s May 13 21:55:46.859: INFO: Pod "downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.187212719s STEP: Saw pod success May 13 21:55:46.859: INFO: Pod "downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa" satisfied condition "success or failure" May 13 21:55:46.862: INFO: Trying to get logs from node jerma-worker pod downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa container dapi-container: STEP: delete the pod May 13 21:55:46.984: INFO: Waiting for pod downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa to disappear May 13 21:55:46.997: INFO: Pod downward-api-3bd2da80-95cb-42b5-a58f-5fed5e8f3daa no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:46.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3030" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2621,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:47.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 13 21:55:47.035: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:55:54.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9191" for this suite. 
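The init-container run above is summarized by a single line ("PodSpec: initContainers in spec.initContainers"), so it is worth spelling out the shape of the pod under test: init containers run one at a time, in order, and each must exit successfully before the next starts; the regular containers start only after all of them succeed, and with restartPolicy: Never a failed init container fails the pod permanently. A minimal sketch of such a pod (pod name, image, and commands are illustrative, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # illustrative name
spec:
  restartPolicy: Never             # a failed init container fails the pod; nothing is retried
  initContainers:
  - name: init-1                   # runs first and must exit 0
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init-2                   # starts only after init-1 succeeds
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run-1                    # starts only after every init container has succeeded
    image: busybox:1.29
    command: ["/bin/true"]

While such a pod initializes, its status moves through roughly Init:0/2 and Init:1/2 before the main container runs, which is the ordering the test asserts on.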
• [SLOW TEST:7.506 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":154,"skipped":2622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:55:54.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 13 21:55:59.069: INFO: Successfully updated pod "adopt-release-lvd94" STEP: Checking that the Job readopts the Pod May 13 21:55:59.069: INFO: Waiting up to 15m0s for pod "adopt-release-lvd94" in namespace "job-8335" to be "adopted" May 13 21:55:59.079: INFO: Pod "adopt-release-lvd94": Phase="Running", Reason="", readiness=true. Elapsed: 9.545096ms May 13 21:56:01.084: INFO: Pod "adopt-release-lvd94": Phase="Running", Reason="", readiness=true. Elapsed: 2.015211443s May 13 21:56:01.084: INFO: Pod "adopt-release-lvd94" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 13 21:56:01.591: INFO: Successfully updated pod "adopt-release-lvd94" STEP: Checking that the Job releases the Pod May 13 21:56:01.591: INFO: Waiting up to 15m0s for pod "adopt-release-lvd94" in namespace "job-8335" to be "released" May 13 21:56:01.610: INFO: Pod "adopt-release-lvd94": Phase="Running", Reason="", readiness=true. Elapsed: 19.042433ms May 13 21:56:03.696: INFO: Pod "adopt-release-lvd94": Phase="Running", Reason="", readiness=true. Elapsed: 2.104987699s May 13 21:56:03.696: INFO: Pod "adopt-release-lvd94" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:56:03.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8335" for this suite. 
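Adoption and release in the Job test above are driven by labels and owner references: the controller re-adopts an orphaned pod whose labels still match the Job's selector, and releases a pod whose labels stop matching, which is exactly the "adopted" and "released" conditions waited on above. A minimal Job of this shape (image, command, and the exact parallelism value are illustrative; the log records only that active pods must equal parallelism):

apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release               # mirrors the pod-name prefix adopt-release-* in the log
spec:
  parallelism: 2                    # "Ensuring active pods == parallelism" checks this count
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c                     # illustrative container name
        image: busybox:1.29         # illustrative image
        command: ["sleep", "3600"]  # keep the pod Running while its labels are edited

Removing the auto-generated controller-uid and job-name labels from a pod such as adopt-release-lvd94 detaches it from the Job's selector, which is what the "Removing the labels from the Job's Pod" step does via an API update.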
• [SLOW TEST:9.263 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":155,"skipped":2680,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:56:03.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:56:05.120: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:56:07.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:56:09.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725003765, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:56:12.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:56:12.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2658" for this suite. STEP: Destroying namespace "webhook-2658-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.398 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":156,"skipped":2686,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:56:13.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-40d39f56-4172-49fa-b90c-f56cc2512e06 STEP: Creating a pod to test consume secrets May 13 21:56:13.293: INFO: Waiting up to 5m0s for pod "pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9" in namespace "secrets-1390" to be "success or failure" May 13 21:56:13.295: INFO: Pod "pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.878851ms May 13 21:56:15.298: INFO: Pod "pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004745096s May 13 21:56:17.373: INFO: Pod "pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07994817s STEP: Saw pod success May 13 21:56:17.373: INFO: Pod "pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9" satisfied condition "success or failure" May 13 21:56:17.382: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9 container secret-volume-test: STEP: delete the pod May 13 21:56:17.421: INFO: Waiting for pod pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9 to disappear May 13 21:56:17.437: INFO: Pod pod-secrets-a9e33f6c-27c2-473b-8bb7-36f45c1db1a9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:56:17.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1390" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2701,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:56:17.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-1532 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1532 to expose endpoints map[] May 13 21:56:17.589: INFO: Get endpoints failed (18.608113ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 13 21:56:18.594: INFO: successfully validated that service endpoint-test2 in namespace services-1532 exposes endpoints map[] (1.022733752s elapsed) STEP: Creating pod pod1 in namespace services-1532 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1532 to expose endpoints map[pod1:[80]] May 13 21:56:21.767: INFO: successfully validated that service endpoint-test2 in namespace services-1532 exposes endpoints map[pod1:[80]] (3.167137287s elapsed) STEP: Creating pod pod2 in namespace services-1532 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1532 to expose endpoints map[pod1:[80] pod2:[80]] May 13 21:56:24.927: INFO: successfully validated that service endpoint-test2 in namespace services-1532 exposes endpoints map[pod1:[80] pod2:[80]] (3.156758965s elapsed) STEP: Deleting pod pod1 in namespace services-1532 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1532 to expose endpoints map[pod2:[80]] May 13 21:56:26.010: INFO: successfully validated that service endpoint-test2 in namespace services-1532 exposes endpoints map[pod2:[80]] (1.079025732s elapsed) STEP: Deleting pod pod2 in namespace services-1532 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1532 to expose endpoints map[] May 13 21:56:27.071: INFO: successfully validated that service endpoint-test2 in namespace services-1532 exposes 
endpoints map[] (1.056278654s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:56:27.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1532" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.667 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":158,"skipped":2705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:56:27.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-549 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-549 to expose endpoints map[] May 13 21:56:27.264: INFO: Get endpoints failed (53.647382ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 13 21:56:28.267: INFO: successfully validated that service multi-endpoint-test in namespace services-549 exposes endpoints map[] (1.05656578s elapsed) STEP: Creating pod pod1 in namespace services-549 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-549 to expose endpoints map[pod1:[100]] May 13 21:56:32.408: INFO: successfully validated that service multi-endpoint-test in namespace services-549 exposes endpoints map[pod1:[100]] (4.136846531s elapsed) STEP: Creating pod pod2 in namespace services-549 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-549 to expose endpoints map[pod1:[100] pod2:[101]] May 13 21:56:36.507: INFO: successfully validated that service multi-endpoint-test in namespace services-549 exposes endpoints map[pod1:[100] pod2:[101]] (4.089723855s elapsed) STEP: Deleting pod pod1 in namespace services-549 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-549 to expose endpoints map[pod2:[101]] May 13 21:56:37.616: INFO: successfully validated that service multi-endpoint-test in namespace services-549 exposes endpoints map[pod2:[101]] (1.104448252s elapsed) STEP: Deleting pod pod2 in namespace services-549 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-549 to expose endpoints map[] May 13 21:56:38.675: INFO: successfully validated that service multi-endpoint-test in namespace 
services-549 exposes endpoints map[] (1.05519675s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:56:38.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-549" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.649 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":159,"skipped":2729,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:56:38.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 21:56:55.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8320" for this suite. • [SLOW TEST:17.114 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":160,"skipped":2738,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 21:56:55.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-467 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-467 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-467 May 13 21:56:56.174: INFO: Found 0 stateful pods, waiting for 1 May 13 21:57:06.178: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 13 21:57:06.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 21:57:06.445: INFO: stderr: "I0513 21:57:06.314585 3440 log.go:172] (0xc0009ba0b0) (0xc000623b80) Create stream\nI0513 21:57:06.314660 3440 log.go:172] (0xc0009ba0b0) (0xc000623b80) Stream added, broadcasting: 1\nI0513 21:57:06.316777 3440 log.go:172] (0xc0009ba0b0) Reply frame received for 1\nI0513 21:57:06.316829 3440 log.go:172] (0xc0009ba0b0) (0xc0009be000) Create stream\nI0513 21:57:06.316853 3440 log.go:172] (0xc0009ba0b0) (0xc0009be000) Stream added, broadcasting: 3\nI0513 21:57:06.317688 3440 log.go:172] (0xc0009ba0b0) Reply frame received for 3\nI0513 21:57:06.317738 3440 log.go:172] (0xc0009ba0b0) (0xc0008f6000) Create stream\nI0513 21:57:06.317770 3440 log.go:172] (0xc0009ba0b0) (0xc0008f6000) Stream added, broadcasting: 5\nI0513 21:57:06.318502 3440 log.go:172] (0xc0009ba0b0) Reply frame received for 5\nI0513 21:57:06.402526 3440 log.go:172] (0xc0009ba0b0) Data frame received for 5\nI0513 21:57:06.402555 3440 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0513 21:57:06.402573 3440 log.go:172] (0xc0008f6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:57:06.438512 3440 log.go:172] (0xc0009ba0b0) Data frame received for 3\nI0513 21:57:06.438531 3440 log.go:172] (0xc0009be000) (3) Data frame handling\nI0513 21:57:06.438557 3440 log.go:172] (0xc0009ba0b0) Data frame received for 5\nI0513 21:57:06.438564 3440 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0513 21:57:06.438591 3440 log.go:172] (0xc0009be000) (3) Data frame sent\nI0513 21:57:06.438655 3440 
log.go:172] (0xc0009ba0b0) Data frame received for 3\nI0513 21:57:06.438664 3440 log.go:172] (0xc0009be000) (3) Data frame handling\nI0513 21:57:06.440271 3440 log.go:172] (0xc0009ba0b0) Data frame received for 1\nI0513 21:57:06.440402 3440 log.go:172] (0xc000623b80) (1) Data frame handling\nI0513 21:57:06.440445 3440 log.go:172] (0xc000623b80) (1) Data frame sent\nI0513 21:57:06.440463 3440 log.go:172] (0xc0009ba0b0) (0xc000623b80) Stream removed, broadcasting: 1\nI0513 21:57:06.440485 3440 log.go:172] (0xc0009ba0b0) Go away received\nI0513 21:57:06.440746 3440 log.go:172] (0xc0009ba0b0) (0xc000623b80) Stream removed, broadcasting: 1\nI0513 21:57:06.440758 3440 log.go:172] (0xc0009ba0b0) (0xc0009be000) Stream removed, broadcasting: 3\nI0513 21:57:06.440765 3440 log.go:172] (0xc0009ba0b0) (0xc0008f6000) Stream removed, broadcasting: 5\n" May 13 21:57:06.445: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 21:57:06.445: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 21:57:06.448: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 13 21:57:16.452: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 13 21:57:16.452: INFO: Waiting for statefulset status.replicas updated to 0 May 13 21:57:16.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999661s May 13 21:57:17.486: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.977466701s May 13 21:57:18.490: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.973762805s May 13 21:57:19.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969956452s May 13 21:57:20.498: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.966578044s May 13 21:57:21.503: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.962271914s May 13 21:57:22.506: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.957041092s May 13 21:57:23.511: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.953302462s May 13 21:57:24.516: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.948678173s May 13 21:57:25.521: INFO: Verifying statefulset ss doesn't scale past 1 for another 943.683771ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-467 May 13 21:57:26.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:57:26.749: INFO: stderr: "I0513 21:57:26.656906 3462 log.go:172] (0xc000900790) (0xc00068fe00) Create stream\nI0513 21:57:26.656957 3462 log.go:172] (0xc000900790) (0xc00068fe00) Stream added, broadcasting: 1\nI0513 21:57:26.659606 3462 log.go:172] (0xc000900790) Reply frame received for 1\nI0513 21:57:26.659688 3462 log.go:172] (0xc000900790) (0xc00057a6e0) Create stream\nI0513 21:57:26.659724 3462 log.go:172] (0xc000900790) (0xc00057a6e0) Stream added, broadcasting: 3\nI0513 21:57:26.660720 3462 log.go:172] (0xc000900790) Reply frame received for 3\nI0513 21:57:26.660763 3462 log.go:172] (0xc000900790) (0xc00068fea0) Create stream\nI0513 21:57:26.660785 3462 log.go:172] (0xc000900790) (0xc00068fea0) Stream added, broadcasting: 5\nI0513 21:57:26.662136 3462 log.go:172] (0xc000900790) Reply frame received for 
5\nI0513 21:57:26.742775 3462 log.go:172] (0xc000900790) Data frame received for 3\nI0513 21:57:26.742809 3462 log.go:172] (0xc00057a6e0) (3) Data frame handling\nI0513 21:57:26.742818 3462 log.go:172] (0xc00057a6e0) (3) Data frame sent\nI0513 21:57:26.742823 3462 log.go:172] (0xc000900790) Data frame received for 3\nI0513 21:57:26.742828 3462 log.go:172] (0xc00057a6e0) (3) Data frame handling\nI0513 21:57:26.742852 3462 log.go:172] (0xc000900790) Data frame received for 5\nI0513 21:57:26.742883 3462 log.go:172] (0xc00068fea0) (5) Data frame handling\nI0513 21:57:26.742910 3462 log.go:172] (0xc00068fea0) (5) Data frame sent\nI0513 21:57:26.742927 3462 log.go:172] (0xc000900790) Data frame received for 5\nI0513 21:57:26.742938 3462 log.go:172] (0xc00068fea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0513 21:57:26.744530 3462 log.go:172] (0xc000900790) Data frame received for 1\nI0513 21:57:26.744554 3462 log.go:172] (0xc00068fe00) (1) Data frame handling\nI0513 21:57:26.744580 3462 log.go:172] (0xc00068fe00) (1) Data frame sent\nI0513 21:57:26.744598 3462 log.go:172] (0xc000900790) (0xc00068fe00) Stream removed, broadcasting: 1\nI0513 21:57:26.744750 3462 log.go:172] (0xc000900790) Go away received\nI0513 21:57:26.744980 3462 log.go:172] (0xc000900790) (0xc00068fe00) Stream removed, broadcasting: 1\nI0513 21:57:26.744998 3462 log.go:172] (0xc000900790) (0xc00057a6e0) Stream removed, broadcasting: 3\nI0513 21:57:26.745018 3462 log.go:172] (0xc000900790) (0xc00068fea0) Stream removed, broadcasting: 5\n" May 13 21:57:26.749: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 21:57:26.749: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 21:57:26.752: INFO: Found 1 stateful pods, waiting for 3 May 13 21:57:36.756: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 13 21:57:36.756: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 13 21:57:36.756: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 13 21:57:36.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 21:57:36.991: INFO: stderr: "I0513 21:57:36.893321 3483 log.go:172] (0xc0008cc630) (0xc000954000) Create stream\nI0513 21:57:36.893384 3483 log.go:172] (0xc0008cc630) (0xc000954000) Stream added, broadcasting: 1\nI0513 21:57:36.895646 3483 log.go:172] (0xc0008cc630) Reply frame received for 1\nI0513 21:57:36.895684 3483 log.go:172] (0xc0008cc630) (0xc0008bc000) Create stream\nI0513 21:57:36.895693 3483 log.go:172] (0xc0008cc630) (0xc0008bc000) Stream added, broadcasting: 3\nI0513 21:57:36.896811 3483 log.go:172] (0xc0008cc630) Reply frame received for 3\nI0513 21:57:36.896868 3483 log.go:172] (0xc0008cc630) (0xc0009540a0) Create stream\nI0513 21:57:36.896886 3483 log.go:172] (0xc0008cc630) (0xc0009540a0) Stream added, broadcasting: 5\nI0513 21:57:36.898061 3483 log.go:172] (0xc0008cc630) Reply frame received for 5\nI0513 21:57:36.984121 3483 log.go:172] (0xc0008cc630) Data frame received for 3\nI0513 21:57:36.984159 3483 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0513 21:57:36.984177 
3483 log.go:172] (0xc0008bc000) (3) Data frame sent\nI0513 21:57:36.984193 3483 log.go:172] (0xc0008cc630) Data frame received for 3\nI0513 21:57:36.984209 3483 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0513 21:57:36.984233 3483 log.go:172] (0xc0008cc630) Data frame received for 5\nI0513 21:57:36.984259 3483 log.go:172] (0xc0009540a0) (5) Data frame handling\nI0513 21:57:36.984281 3483 log.go:172] (0xc0009540a0) (5) Data frame sent\nI0513 21:57:36.984291 3483 log.go:172] (0xc0008cc630) Data frame received for 5\nI0513 21:57:36.984300 3483 log.go:172] (0xc0009540a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:57:36.985970 3483 log.go:172] (0xc0008cc630) Data frame received for 1\nI0513 21:57:36.985999 3483 log.go:172] (0xc000954000) (1) Data frame handling\nI0513 21:57:36.986015 3483 log.go:172] (0xc000954000) (1) Data frame sent\nI0513 21:57:36.986027 3483 log.go:172] (0xc0008cc630) (0xc000954000) Stream removed, broadcasting: 1\nI0513 21:57:36.986054 3483 log.go:172] (0xc0008cc630) Go away received\nI0513 21:57:36.986320 3483 log.go:172] (0xc0008cc630) (0xc000954000) Stream removed, broadcasting: 1\nI0513 21:57:36.986335 3483 log.go:172] (0xc0008cc630) (0xc0008bc000) Stream removed, broadcasting: 3\nI0513 21:57:36.986342 3483 log.go:172] (0xc0008cc630) (0xc0009540a0) Stream removed, broadcasting: 5\n" May 13 21:57:36.991: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 21:57:36.991: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 21:57:36.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 21:57:37.219: INFO: stderr: "I0513 21:57:37.120207 3503 log.go:172] (0xc0000f5600) (0xc000a38000) Create stream\nI0513 21:57:37.120263 3503 log.go:172] (0xc0000f5600) (0xc000a38000) Stream added, broadcasting: 1\nI0513 21:57:37.123315 3503 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0513 21:57:37.123364 3503 log.go:172] (0xc0000f5600) (0xc000736000) Create stream\nI0513 21:57:37.123383 3503 log.go:172] (0xc0000f5600) (0xc000736000) Stream added, broadcasting: 3\nI0513 21:57:37.124422 3503 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0513 21:57:37.124461 3503 log.go:172] (0xc0000f5600) (0xc000736140) Create stream\nI0513 21:57:37.124471 3503 log.go:172] (0xc0000f5600) (0xc000736140) Stream added, broadcasting: 5\nI0513 21:57:37.125625 3503 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0513 21:57:37.182722 3503 log.go:172] (0xc0000f5600) Data frame received for 5\nI0513 21:57:37.182747 3503 log.go:172] (0xc000736140) (5) Data frame handling\nI0513 21:57:37.182763 3503 log.go:172] (0xc000736140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:57:37.209575 3503 log.go:172] (0xc0000f5600) Data frame received for 5\nI0513 21:57:37.209622 3503 log.go:172] (0xc000736140) (5) Data frame handling\nI0513 21:57:37.209656 3503 log.go:172] (0xc0000f5600) Data frame received for 3\nI0513 21:57:37.209671 3503 log.go:172] (0xc000736000) (3) Data frame handling\nI0513 21:57:37.209692 3503 log.go:172] (0xc000736000) (3) Data frame sent\nI0513 21:57:37.209812 3503 log.go:172] (0xc0000f5600) Data frame received for 3\nI0513 21:57:37.209837 3503 log.go:172] (0xc000736000) (3) Data frame handling\nI0513 21:57:37.212156 3503 
log.go:172] (0xc0000f5600) Data frame received for 1\nI0513 21:57:37.212180 3503 log.go:172] (0xc000a38000) (1) Data frame handling\nI0513 21:57:37.212203 3503 log.go:172] (0xc000a38000) (1) Data frame sent\nI0513 21:57:37.212235 3503 log.go:172] (0xc0000f5600) (0xc000a38000) Stream removed, broadcasting: 1\nI0513 21:57:37.212285 3503 log.go:172] (0xc0000f5600) Go away received\nI0513 21:57:37.212643 3503 log.go:172] (0xc0000f5600) (0xc000a38000) Stream removed, broadcasting: 1\nI0513 21:57:37.212674 3503 log.go:172] (0xc0000f5600) (0xc000736000) Stream removed, broadcasting: 3\nI0513 21:57:37.212692 3503 log.go:172] (0xc0000f5600) (0xc000736140) Stream removed, broadcasting: 5\n" May 13 21:57:37.220: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 21:57:37.220: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 21:57:37.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 21:57:37.537: INFO: stderr: "I0513 21:57:37.356885 3526 log.go:172] (0xc0000f4a50) (0xc0008f8000) Create stream\nI0513 21:57:37.356947 3526 log.go:172] (0xc0000f4a50) (0xc0008f8000) Stream added, broadcasting: 1\nI0513 21:57:37.359089 3526 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0513 21:57:37.359125 3526 log.go:172] (0xc0000f4a50) (0xc000693a40) Create stream\nI0513 21:57:37.359144 3526 log.go:172] (0xc0000f4a50) (0xc000693a40) Stream added, broadcasting: 3\nI0513 21:57:37.360044 3526 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0513 21:57:37.360103 3526 log.go:172] (0xc0000f4a50) (0xc0008f80a0) Create stream\nI0513 21:57:37.360131 3526 log.go:172] (0xc0000f4a50) (0xc0008f80a0) Stream added, broadcasting: 5\nI0513 21:57:37.361019 3526 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0513 21:57:37.500570 3526 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0513 21:57:37.500601 3526 log.go:172] (0xc0008f80a0) (5) Data frame handling\nI0513 21:57:37.500616 3526 log.go:172] (0xc0008f80a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0513 21:57:37.527049 3526 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0513 21:57:37.527071 3526 log.go:172] (0xc000693a40) (3) Data frame handling\nI0513 21:57:37.527090 3526 log.go:172] (0xc000693a40) (3) Data frame sent\nI0513 21:57:37.527103 3526 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0513 21:57:37.527134 3526 log.go:172] (0xc000693a40) (3) Data frame handling\nI0513 21:57:37.527339 3526 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0513 21:57:37.527350 3526 log.go:172] (0xc0008f80a0) (5) Data frame handling\nI0513 21:57:37.529284 3526 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0513 21:57:37.529324 3526 log.go:172] (0xc0008f8000) (1) Data frame handling\nI0513 21:57:37.529355 3526 log.go:172] (0xc0008f8000) (1) Data frame sent\nI0513 21:57:37.529378 3526 log.go:172] (0xc0000f4a50) (0xc0008f8000) Stream removed, broadcasting: 1\nI0513 21:57:37.529397 3526 log.go:172] (0xc0000f4a50) Go away received\nI0513 21:57:37.529964 3526 log.go:172] (0xc0000f4a50) (0xc0008f8000) Stream removed, broadcasting: 1\nI0513 21:57:37.529998 3526 log.go:172] (0xc0000f4a50) (0xc000693a40) Stream removed, broadcasting: 3\nI0513 21:57:37.530018 3526 log.go:172] (0xc0000f4a50) (0xc0008f80a0) Stream removed, broadcasting: 5\n" May 
13 21:57:37.538: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 21:57:37.538: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 21:57:37.538: INFO: Waiting for statefulset status.replicas updated to 0 May 13 21:57:37.540: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 13 21:57:47.549: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 13 21:57:47.549: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 13 21:57:47.549: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 13 21:57:47.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999645s May 13 21:57:48.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987501687s May 13 21:57:49.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982022042s May 13 21:57:50.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976894299s May 13 21:57:51.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965550536s May 13 21:57:52.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960337396s May 13 21:57:53.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956224072s May 13 21:57:54.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950666705s May 13 21:57:55.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945344458s May 13 21:57:56.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 941.954442ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-467 May 13 21:57:57.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:57:57.865: INFO: stderr: "I0513 21:57:57.799136 3549 log.go:172] (0xc000af3130) (0xc000ad25a0) Create stream\nI0513 21:57:57.799168 3549 log.go:172] (0xc000af3130) (0xc000ad25a0) Stream added, broadcasting: 1\nI0513 21:57:57.800365 3549 log.go:172] (0xc000af3130) Reply frame received for 1\nI0513 21:57:57.800396 3549 log.go:172] (0xc000af3130) (0xc000a843c0) Create stream\nI0513 21:57:57.800408 3549 log.go:172] (0xc000af3130) (0xc000a843c0) Stream added, broadcasting: 3\nI0513 21:57:57.801044 3549 log.go:172] (0xc000af3130) Reply frame received for 3\nI0513 21:57:57.801077 3549 log.go:172] (0xc000af3130) (0xc000ad2640) Create stream\nI0513 21:57:57.801089 3549 log.go:172] (0xc000af3130) (0xc000ad2640) Stream added, broadcasting: 5\nI0513 21:57:57.801690 3549 log.go:172] (0xc000af3130) Reply frame received for 5\nI0513 21:57:57.858358 3549 log.go:172] (0xc000af3130) Data frame received for 5\nI0513 21:57:57.858399 3549 log.go:172] (0xc000ad2640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0513 21:57:57.858424 3549 log.go:172] (0xc000af3130) Data frame received for 3\nI0513 21:57:57.858457 3549 log.go:172] (0xc000a843c0) (3) Data frame handling\nI0513 21:57:57.858469 3549 log.go:172] (0xc000a843c0) (3) Data frame sent\nI0513 21:57:57.858485 3549 log.go:172] (0xc000af3130) Data frame received for 3\nI0513 21:57:57.858498 3549 log.go:172] (0xc000a843c0) (3) Data frame handling\nI0513 21:57:57.858528 3549 log.go:172] 
(0xc000ad2640) (5) Data frame sent\nI0513 21:57:57.858543 3549 log.go:172] (0xc000af3130) Data frame received for 5\nI0513 21:57:57.858551 3549 log.go:172] (0xc000ad2640) (5) Data frame handling\nI0513 21:57:57.859557 3549 log.go:172] (0xc000af3130) Data frame received for 1\nI0513 21:57:57.859577 3549 log.go:172] (0xc000ad25a0) (1) Data frame handling\nI0513 21:57:57.859588 3549 log.go:172] (0xc000ad25a0) (1) Data frame sent\nI0513 21:57:57.859602 3549 log.go:172] (0xc000af3130) (0xc000ad25a0) Stream removed, broadcasting: 1\nI0513 21:57:57.859742 3549 log.go:172] (0xc000af3130) Go away received\nI0513 21:57:57.860303 3549 log.go:172] (0xc000af3130) (0xc000ad25a0) Stream removed, broadcasting: 1\nI0513 21:57:57.860341 3549 log.go:172] (0xc000af3130) (0xc000a843c0) Stream removed, broadcasting: 3\nI0513 21:57:57.860364 3549 log.go:172] (0xc000af3130) (0xc000ad2640) Stream removed, broadcasting: 5\n" May 13 21:57:57.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 21:57:57.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 21:57:57.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:57:58.042: INFO: stderr: "I0513 21:57:57.978264 3569 log.go:172] (0xc000210dc0) (0xc0009ec000) Create stream\nI0513 21:57:57.978307 3569 log.go:172] (0xc000210dc0) (0xc0009ec000) Stream added, broadcasting: 1\nI0513 21:57:57.980867 3569 log.go:172] (0xc000210dc0) Reply frame received for 1\nI0513 21:57:57.980892 3569 log.go:172] (0xc000210dc0) (0xc00068ba40) Create stream\nI0513 21:57:57.980900 3569 log.go:172] (0xc000210dc0) (0xc00068ba40) Stream added, broadcasting: 3\nI0513 21:57:57.981896 3569 log.go:172] (0xc000210dc0) Reply frame received for 3\nI0513 21:57:57.981938 3569 log.go:172] (0xc000210dc0) (0xc0003c0000) Create stream\nI0513 21:57:57.981956 3569 log.go:172] (0xc000210dc0) (0xc0003c0000) Stream added, broadcasting: 5\nI0513 21:57:57.982819 3569 log.go:172] (0xc000210dc0) Reply frame received for 5\nI0513 21:57:58.037766 3569 log.go:172] (0xc000210dc0) Data frame received for 5\nI0513 21:57:58.037784 3569 log.go:172] (0xc0003c0000) (5) Data frame handling\nI0513 21:57:58.037790 3569 log.go:172] (0xc0003c0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0513 21:57:58.037831 3569 log.go:172] (0xc000210dc0) Data frame received for 3\nI0513 21:57:58.037893 3569 log.go:172] (0xc00068ba40) (3) Data frame handling\nI0513 21:57:58.037921 3569 log.go:172] (0xc00068ba40) (3) Data frame sent\nI0513 21:57:58.037938 3569 log.go:172] (0xc000210dc0) Data frame received for 3\nI0513 21:57:58.037950 3569 log.go:172] (0xc00068ba40) (3) Data frame handling\nI0513 21:57:58.037974 3569 log.go:172] (0xc000210dc0) Data frame received for 5\nI0513 21:57:58.038004 3569 log.go:172] (0xc0003c0000) (5) Data frame handling\nI0513 21:57:58.038361 3569 log.go:172] (0xc000210dc0) Data frame received for 1\nI0513 21:57:58.038373 3569 log.go:172] (0xc0009ec000) (1) Data frame handling\nI0513 21:57:58.038382 3569 log.go:172] (0xc0009ec000) (1) Data frame sent\nI0513 21:57:58.038398 3569 log.go:172] (0xc000210dc0) (0xc0009ec000) Stream removed, broadcasting: 1\nI0513 21:57:58.038409 3569 log.go:172] (0xc000210dc0) Go away received\nI0513 21:57:58.038717 3569 log.go:172] (0xc000210dc0) (0xc0009ec000) Stream removed, 
broadcasting: 1\nI0513 21:57:58.038726 3569 log.go:172] (0xc000210dc0) (0xc00068ba40) Stream removed, broadcasting: 3\nI0513 21:57:58.038731 3569 log.go:172] (0xc000210dc0) (0xc0003c0000) Stream removed, broadcasting: 5\n" May 13 21:57:58.042: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 21:57:58.043: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 21:57:58.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:57:58.275: INFO: rc: 1 May 13 21:57:58.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: I0513 21:57:58.218800 3588 log.go:172] (0xc000a47760) (0xc000c02280) Create stream I0513 21:57:58.218847 3588 log.go:172] (0xc000a47760) (0xc000c02280) Stream added, broadcasting: 1 I0513 21:57:58.221611 3588 log.go:172] (0xc000a47760) Reply frame received for 1 I0513 21:57:58.221644 3588 log.go:172] (0xc000a47760) (0xc000a28140) Create stream I0513 21:57:58.221652 3588 log.go:172] (0xc000a47760) (0xc000a28140) Stream added, broadcasting: 3 I0513 21:57:58.222262 3588 log.go:172] (0xc000a47760) Reply frame received for 3 I0513 21:57:58.222284 3588 log.go:172] (0xc000a47760) (0xc000a281e0) Create stream I0513 21:57:58.222291 3588 log.go:172] (0xc000a47760) (0xc000a281e0) Stream added, broadcasting: 5 I0513 21:57:58.222857 3588 log.go:172] (0xc000a47760) Reply frame received for 5 I0513 21:57:58.270806 3588 log.go:172] (0xc000a47760) Data frame received for 1 I0513 21:57:58.270832 3588 log.go:172] (0xc000c02280) (1) Data frame handling I0513 21:57:58.270855 3588 log.go:172] (0xc000c02280) (1) Data frame sent I0513 21:57:58.270889 3588 log.go:172] (0xc000a47760) (0xc000a28140) Stream removed, broadcasting: 3 I0513 21:57:58.270929 3588 log.go:172] (0xc000a47760) (0xc000c02280) Stream removed, broadcasting: 1 I0513 21:57:58.270961 3588 log.go:172] (0xc000a47760) (0xc000a281e0) Stream removed, broadcasting: 5 I0513 21:57:58.270992 3588 log.go:172] (0xc000a47760) Go away received I0513 21:57:58.271374 3588 log.go:172] (0xc000a47760) (0xc000c02280) Stream removed, broadcasting: 1 I0513 21:57:58.271402 3588 log.go:172] (0xc000a47760) (0xc000a28140) Stream removed, broadcasting: 3 I0513 21:57:58.271411 3588 log.go:172] (0xc000a47760) (0xc000a281e0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "4bde810d30ec0858be166fa3610415359d6f4f5c1f04ecb182e622fd1f3f1199": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown error: exit status 1 May 13 21:58:08.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:58:08.402: INFO: rc: 1 May 13 21:58:08.402: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 13 21:58:18.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:58:18.508: INFO: rc: 1 May 13 21:58:18.508: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 21:58:28.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:58:28.617: INFO: rc: 1 May 13 21:58:28.617: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 21:58:38.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:58:38.715: INFO: rc: 1 May 13 21:58:38.715: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 21:58:48.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:58:48.817: INFO: rc: 1 May 13 21:58:48.817: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 21:58:58.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:58:58.921: INFO: rc: 1 May 13 21:58:58.921: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 21:59:08.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 21:59:09.020: INFO: rc: 1 May 13 21:59:09.021: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 [the same 10s retry cycle — Running the kubectl exec, rc: 1, Error from server (NotFound): pods "ss-2" not found — repeats unchanged roughly every 10s from 21:59:19 through 22:02:14 and is elided here] May 13 22:02:24.189: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:02:24.298: INFO: rc: 1 May 13 22:02:24.298: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 22:02:34.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:02:34.396: INFO: rc: 1 May 13 22:02:34.396: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 22:02:44.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:02:44.495: INFO: rc: 1 May 13 22:02:44.495: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 22:02:54.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:02:54.603: INFO: rc: 1 May 13 22:02:54.603: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 13 22:03:04.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:03:04.706: INFO: rc: 1 May 13 22:03:04.706: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: May 13 22:03:04.706: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 13 22:03:04.717: INFO: Deleting all statefulset in ns statefulset-467 May 13 22:03:04.719: INFO: Scaling statefulset ss to 0 May 13 22:03:04.727: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:03:04.729: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:03:04.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-467" for this suite. 
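[Note on the retry loop above] RunHostCmd shells out to kubectl exec and retries every 10 seconds until the command succeeds or the suite gives up. A minimal Go sketch of that pattern — using plain os/exec rather than the framework's helpers, with paths and names copied from the log:

// Not the framework's actual RunHostCmd: a sketch of the retry pattern the
// log shows, shelling out to kubectl and retrying every 10s until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func execOnPod(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := execOnPod("statefulset-467", "ss-2",
			"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
		if err == nil {
			fmt.Print(out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up after repeated failures:", err)
			return
		}
		time.Sleep(10 * time.Second) // "Waiting 10s to retry failed RunHostCmd"
	}
}

Because the remote command ends in || true, a successful exec always exits 0; the persistent rc: 1 above therefore comes from kubectl itself failing to reach ss-2 (container not found, then pod NotFound after the scale-down), not from mv.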
• [SLOW TEST:368.880 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":161,"skipped":2753,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:03:04.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 13 22:03:09.385: INFO: Successfully updated pod "labelsupdateff8266a8-af4a-4c1e-89c3-3d0054ce2091" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:03:11.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6254" for this suite. 
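[Note on the labels-update test above] The check relies on a downward API volume: a file in the volume projects metadata.labels, and the kubelet rewrites that file when the pod's labels change. A sketch of such a pod spec in client-go types — the pod name, image, and command here are illustrative, not the test's exact manifest:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsPod returns a pod whose downward API volume projects metadata.labels
// into /etc/podinfo/labels; the kubelet refreshes the file on label updates.
func labelsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo", // hypothetical name
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative; the test uses its own image
				Command: []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
}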
• [SLOW TEST:6.721 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2759,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:03:11.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:03:15.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2329" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":163,"skipped":2760,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:03:15.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:03:48.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2623" for this suite. STEP: Destroying namespace "nsdeletetest-9992" for this suite. May 13 22:03:48.734: INFO: Namespace nsdeletetest-9992 was already deleted STEP: Destroying namespace "nsdeletetest-7626" for this suite. 
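[Note on the namespace-deletion test above] The sequence (create namespace, create pod, delete namespace, wait, verify no pods remain) hinges on namespace deletion cascading to every object inside it. A sketch of the delete-and-wait step with client-go, assuming a post-0.18 client whose calls take a context (the 1.17-era code in this log uses context-free signatures):

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteAndWait removes a namespace and polls until the API server reports
// NotFound; at that point every pod the namespace contained is gone too.
func deleteAndWait(c kubernetes.Interface, ns string) error {
	if err := c.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	for {
		_, err := c.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil
		}
		if err != nil {
			return err
		}
		time.Sleep(2 * time.Second)
	}
}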
• [SLOW TEST:32.758 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":164,"skipped":2781,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:03:48.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:03:48.781: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:03:49.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9742" for this suite. 
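[Note on the CRD tests above and below] A sketch of the kind of CustomResourceDefinition this test (and the watch test that follows) creates and deletes, using the apiextensions v1beta1 types this 1.17-era suite exercises. Group, plural, and kind are taken from the log's noxus objects; the rest, including the context-taking client signatures, is illustrative:

package main

import (
	"context"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createAndDeleteCRD registers a cluster-scoped noxus CRD, then removes it.
func createAndDeleteCRD(c apiextclient.Interface) error {
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "mygroup.example.com",
			Version: "v1beta1",
			Scope:   apiextv1beta1.ClusterScoped, // selfLinks in the log carry no namespace
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "noxus",
				Singular: "noxu",
				Kind:     "WishIHadChosenNoxu",
				ListKind: "WishIHadChosenNoxuList",
			},
		},
	}
	if _, err := c.ApiextensionsV1beta1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		return err
	}
	return c.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(context.TODO(), crd.Name, metav1.DeleteOptions{})
}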
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":165,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:03:49.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:03:50.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 13 22:03:50.645: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-13T22:03:50Z generation:1 name:name1 resourceVersion:15954346 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:eee60337-8e14-464d-97fe-718bee529f42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 13 22:04:00.651: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-13T22:04:00Z generation:1 name:name2 resourceVersion:15954390 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:61aaa3b2-4613-4ab9-b0e4-776133c2e43d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 13 22:04:10.656: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-13T22:03:50Z generation:2 name:name1 resourceVersion:15954420 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:eee60337-8e14-464d-97fe-718bee529f42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 13 22:04:20.662: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-13T22:04:00Z generation:2 name:name2 resourceVersion:15954450 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:61aaa3b2-4613-4ab9-b0e4-776133c2e43d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 13 22:04:30.670: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-13T22:03:50Z generation:2 name:name1 resourceVersion:15954480 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:eee60337-8e14-464d-97fe-718bee529f42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 13 22:04:40.679: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-13T22:04:00Z generation:2 name:name2 resourceVersion:15954510 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:61aaa3b2-4613-4ab9-b0e4-776133c2e43d] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:04:51.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1947" for this suite. • [SLOW TEST:61.241 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":166,"skipped":2801,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:04:51.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-1266ac51-fa24-4690-9c25-967e0b3278d0 STEP: Creating a pod to test consume configMaps May 13 22:04:51.325: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635" in namespace "configmap-9884" to be "success or failure" May 13 22:04:51.328: INFO: Pod "pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635": Phase="Pending", Reason="", readiness=false. Elapsed: 3.089087ms May 13 22:04:53.332: INFO: Pod "pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006847791s May 13 22:04:55.336: INFO: Pod "pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635": Phase="Running", Reason="", readiness=true. Elapsed: 4.011298548s May 13 22:04:57.346: INFO: Pod "pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.02121563s STEP: Saw pod success May 13 22:04:57.346: INFO: Pod "pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635" satisfied condition "success or failure" May 13 22:04:57.349: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635 container configmap-volume-test: STEP: delete the pod May 13 22:04:57.371: INFO: Waiting for pod pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635 to disappear May 13 22:04:57.396: INFO: Pod pod-configmaps-3a65deb2-bcdf-4adf-9b8e-425ae020b635 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:04:57.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9884" for this suite. • [SLOW TEST:6.205 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2809,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:04:57.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:04:57.491: INFO: Create a RollingUpdate DaemonSet May 13 22:04:57.493: INFO: Check that daemon pods launch on every node of the cluster May 13 22:04:57.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:04:57.556: INFO: Number of nodes with available pods: 0 May 13 22:04:57.556: INFO: Node jerma-worker is running more than one daemon pod May 13 22:04:58.559: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:04:58.562: INFO: Number of nodes with available pods: 0 May 13 22:04:58.562: INFO: Node jerma-worker is running more than one daemon pod May 13 22:04:59.561: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:04:59.785: INFO: Number of nodes with available pods: 0 May 13 22:04:59.785: INFO: Node jerma-worker is running more than one daemon pod May 13 22:05:00.747: INFO: DaemonSet pods can't tolerate node 
jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:05:00.772: INFO: Number of nodes with available pods: 0 May 13 22:05:00.772: INFO: Node jerma-worker is running more than one daemon pod May 13 22:05:01.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:05:01.617: INFO: Number of nodes with available pods: 0 May 13 22:05:01.617: INFO: Node jerma-worker is running more than one daemon pod May 13 22:05:02.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:05:02.597: INFO: Number of nodes with available pods: 2 May 13 22:05:02.597: INFO: Number of running nodes: 2, number of available pods: 2 May 13 22:05:02.597: INFO: Update the DaemonSet to trigger a rollout May 13 22:05:02.641: INFO: Updating DaemonSet daemon-set May 13 22:05:07.692: INFO: Roll back the DaemonSet before rollout is complete May 13 22:05:07.699: INFO: Updating DaemonSet daemon-set May 13 22:05:07.699: INFO: Make sure DaemonSet rollback is complete May 13 22:05:07.722: INFO: Wrong image for pod: daemon-set-rhf86. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 13 22:05:07.722: INFO: Pod daemon-set-rhf86 is not available May 13 22:05:07.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:05:08.742: INFO: Wrong image for pod: daemon-set-rhf86. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 13 22:05:08.742: INFO: Pod daemon-set-rhf86 is not available May 13 22:05:08.747: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:05:09.990: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:05:10.741: INFO: Pod daemon-set-5kp75 is not available May 13 22:05:10.745: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5549, will wait for the garbage collector to delete the pods May 13 22:05:10.810: INFO: Deleting DaemonSet.extensions daemon-set took: 6.960835ms May 13 22:05:11.110: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.202759ms May 13 22:05:19.529: INFO: Number of nodes with available pods: 0 May 13 22:05:19.529: INFO: Number of running nodes: 0, number of available pods: 0 May 13 22:05:19.532: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5549/daemonsets","resourceVersion":"15954736"},"items":null} May 13 22:05:19.534: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5549/pods","resourceVersion":"15954736"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:19.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5549" for this suite. 
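[Note on the rollback above] The rollback is performed by restoring the previous pod template before the bad rollout finishes, so daemon pods that never ran foo:non-existent are not restarted. A client-go sketch of that sequence, with names taken from the log and conflict handling omitted:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollbackDaemonSet pushes a bad image to trigger a rollout, then restores
// the previous template before the rollout completes.
func rollbackDaemonSet(c kubernetes.Interface, ns string) error {
	ds, err := c.AppsV1().DaemonSets(ns).Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // e.g. httpd:2.4.38-alpine

	// Trigger a RollingUpdate rollout with an unpullable image.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = c.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Roll back before the rollout completes; pods still on the good image
	// keep running, which is the "without unnecessary restarts" assertion.
	ds.Spec.Template.Spec.Containers[0].Image = good
	_, err = c.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}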
• [SLOW TEST:22.148 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":168,"skipped":2820,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:19.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 13 22:05:19.664: INFO: Waiting up to 5m0s for pod "var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062" in namespace "var-expansion-4392" to be "success or failure" May 13 22:05:19.670: INFO: Pod "var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695071ms May 13 22:05:21.674: INFO: Pod "var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010614729s May 13 22:05:23.678: INFO: Pod "var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014514418s STEP: Saw pod success May 13 22:05:23.678: INFO: Pod "var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062" satisfied condition "success or failure" May 13 22:05:23.681: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062 container dapi-container: STEP: delete the pod May 13 22:05:23.920: INFO: Waiting for pod var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062 to disappear May 13 22:05:23.940: INFO: Pod var-expansion-ee0c1591-6bae-4bfc-bcfb-054f3e97e062 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:23.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4392" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2822,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:23.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:37.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5849" for this suite. • [SLOW TEST:13.281 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":170,"skipped":2825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:37.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-1dcfca0f-f615-4a87-871f-ef04452b5322 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:37.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4400" for this suite. 
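[Note on the empty-key test above] The failure is API-server validation, not test logic: ConfigMap data keys must be non-empty and match the usual key character rules. A client-go sketch of the create call that is expected to fail; the name is illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeyConfigMap attempts the create the test expects to be rejected.
func createEmptyKeyConfigMap(c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key fails validation
	}
	_, err := c.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{})
	return err // expected non-nil: Invalid value for the empty data key
}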
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":171,"skipped":2853,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:37.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:05:37.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9603' May 13 22:05:37.779: INFO: stderr: "" May 13 22:05:37.779: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 13 22:05:37.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9603' May 13 22:05:38.156: INFO: stderr: "" May 13 22:05:38.156: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 13 22:05:39.160: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:05:39.160: INFO: Found 0 / 1 May 13 22:05:40.160: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:05:40.160: INFO: Found 0 / 1 May 13 22:05:41.160: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:05:41.160: INFO: Found 0 / 1 May 13 22:05:42.160: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:05:42.161: INFO: Found 1 / 1 May 13 22:05:42.161: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 13 22:05:42.164: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:05:42.164: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 13 22:05:42.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-2kv6p --namespace=kubectl-9603' May 13 22:05:42.277: INFO: stderr: "" May 13 22:05:42.277: INFO: stdout: "Name: agnhost-master-2kv6p\nNamespace: kubectl-9603\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Wed, 13 May 2020 22:05:37 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.144\nIPs:\n IP: 10.244.1.144\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://1745eeb252fd84dc03cfd1609f52eec63bae85a41c3191d8a2de4aba02191bde\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 13 May 2020 22:05:40 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-728lz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-728lz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-728lz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-9603/agnhost-master-2kv6p to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker Started container agnhost-master\n" May 13 22:05:42.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9603' May 13 22:05:42.411: INFO: stderr: "" May 13 22:05:42.411: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9603\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-2kv6p\n" May 13 22:05:42.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9603' May 13 22:05:42.554: INFO: stderr: "" May 13 22:05:42.554: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9603\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.98.83.71\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.144:6379\nSession Affinity: None\nEvents: \n" May 13 22:05:42.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 13 22:05:42.681: INFO: stderr: "" May 13 22:05:42.681: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Wed, 13 May 2020 22:05:41 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 13 May 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 13 May 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 13 May 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 13 May 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 59d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 59d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 59d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 13 22:05:42.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9603' May 13 22:05:42.787: INFO: stderr: "" May 13 22:05:42.787: INFO: stdout: "Name: kubectl-9603\nLabels: e2e-framework=kubectl\n e2e-run=d033ebe1-a1df-4403-9844-2873134c9854\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:42.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9603" for this suite. • [SLOW TEST:5.417 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":172,"skipped":2855,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:42.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 13 22:05:42.940: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-resource-version e0a459c6-206d-4fcf-bf42-4dcb75c07524 15954919 0 2020-05-13 22:05:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 13 22:05:42.941: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-resource-version e0a459c6-206d-4fcf-bf42-4dcb75c07524 15954920 0 2020-05-13 22:05:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:42.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-856" for this suite. 
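[Note on the watch test above] The watch is opened with the resourceVersion returned by the first update, so the earlier ADDED and first MODIFIED events are skipped and only the later MODIFIED (mutation: 2) and DELETED are delivered, exactly as logged. A client-go sketch, with the field selector and names taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFrom opens a ConfigMap watch starting at a saved resourceVersion, so
// only events after that point in etcd history are delivered.
func watchFrom(c kubernetes.Interface, ns, rv string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: rv, // the version returned by the first update
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // MODIFIED, then DELETED, as in the log
	}
	return nil
}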
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":173,"skipped":2869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:42.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 13 22:05:43.034: INFO: Created pod &Pod{ObjectMeta:{dns-7452 dns-7452 /api/v1/namespaces/dns-7452/pods/dns-7452 12a17c62-52d3-40eb-b123-517a0d8d4063 15954926 0 2020-05-13 22:05:43 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nq2m8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nq2m8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nq2m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSCo
nfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 13 22:05:47.042: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7452 PodName:dns-7452 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:05:47.042: INFO: >>> kubeConfig: /root/.kube/config I0513 22:05:47.066861 6 log.go:172] (0xc002757ce0) (0xc002235680) Create stream I0513 22:05:47.066887 6 log.go:172] (0xc002757ce0) (0xc002235680) Stream added, broadcasting: 1 I0513 22:05:47.068295 6 log.go:172] (0xc002757ce0) Reply frame received for 1 I0513 22:05:47.068328 6 log.go:172] (0xc002757ce0) (0xc00215b220) Create stream I0513 22:05:47.068339 6 log.go:172] (0xc002757ce0) (0xc00215b220) Stream added, broadcasting: 3 I0513 22:05:47.068969 6 log.go:172] (0xc002757ce0) Reply frame received for 3 I0513 22:05:47.069010 6 log.go:172] (0xc002757ce0) (0xc002693ae0) Create stream I0513 22:05:47.069020 6 log.go:172] (0xc002757ce0) (0xc002693ae0) Stream added, broadcasting: 5 I0513 22:05:47.069850 6 log.go:172] (0xc002757ce0) Reply frame received for 5 I0513 22:05:47.170898 6 log.go:172] (0xc002757ce0) Data frame received for 3 I0513 22:05:47.170927 6 log.go:172] (0xc00215b220) (3) Data frame handling I0513 22:05:47.170946 6 log.go:172] (0xc00215b220) (3) Data frame sent I0513 22:05:47.171774 6 log.go:172] (0xc002757ce0) Data frame received for 3 I0513 22:05:47.171825 6 log.go:172] (0xc00215b220) (3) Data frame handling I0513 22:05:47.171868 6 log.go:172] (0xc002757ce0) Data frame received for 5 I0513 22:05:47.171889 6 log.go:172] (0xc002693ae0) (5) Data frame handling I0513 22:05:47.173910 6 log.go:172] (0xc002757ce0) Data frame received for 1 I0513 22:05:47.173937 6 log.go:172] (0xc002235680) (1) Data frame handling I0513 22:05:47.173959 6 log.go:172] (0xc002235680) (1) Data frame sent I0513 22:05:47.173975 6 log.go:172] (0xc002757ce0) (0xc002235680) Stream removed, broadcasting: 1 I0513 22:05:47.174000 6 log.go:172] (0xc002757ce0) Go away received I0513 22:05:47.174138 6 log.go:172] (0xc002757ce0) (0xc002235680) Stream removed, broadcasting: 1 I0513 22:05:47.174157 6 log.go:172] (0xc002757ce0) (0xc00215b220) Stream removed, broadcasting: 3 I0513 22:05:47.174175 6 log.go:172] (0xc002757ce0) (0xc002693ae0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 13 22:05:47.174: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7452 PodName:dns-7452 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:05:47.174: INFO: >>> kubeConfig: /root/.kube/config I0513 22:05:47.198475 6 log.go:172] (0xc001f82370) (0xc002972280) Create stream I0513 22:05:47.198498 6 log.go:172] (0xc001f82370) (0xc002972280) Stream added, broadcasting: 1 I0513 22:05:47.200924 6 log.go:172] (0xc001f82370) Reply frame received for 1 I0513 22:05:47.200957 6 log.go:172] (0xc001f82370) (0xc0029726e0) Create stream I0513 22:05:47.200972 6 log.go:172] (0xc001f82370) (0xc0029726e0) Stream added, broadcasting: 3 I0513 22:05:47.201992 6 log.go:172] (0xc001f82370) Reply frame received for 3 I0513 22:05:47.202025 6 log.go:172] (0xc001f82370) (0xc002693b80) Create stream I0513 22:05:47.202036 6 log.go:172] (0xc001f82370) (0xc002693b80) Stream added, broadcasting: 5 I0513 22:05:47.202835 6 log.go:172] (0xc001f82370) Reply frame received for 5 I0513 22:05:47.312497 6 log.go:172] (0xc001f82370) Data frame received for 3 I0513 22:05:47.312529 6 log.go:172] (0xc0029726e0) (3) Data frame handling I0513 22:05:47.312554 6 log.go:172] (0xc0029726e0) (3) Data frame sent I0513 22:05:47.315455 6 log.go:172] (0xc001f82370) Data frame received for 5 I0513 22:05:47.315496 6 log.go:172] (0xc002693b80) (5) Data frame handling I0513 22:05:47.315518 6 log.go:172] (0xc001f82370) Data frame received for 3 I0513 22:05:47.315533 6 log.go:172] (0xc0029726e0) (3) Data frame handling I0513 22:05:47.317592 6 log.go:172] (0xc001f82370) Data frame received for 1 I0513 22:05:47.317632 6 log.go:172] (0xc002972280) (1) Data frame handling I0513 22:05:47.317658 6 log.go:172] (0xc002972280) (1) Data frame sent I0513 22:05:47.317690 6 log.go:172] (0xc001f82370) (0xc002972280) Stream removed, broadcasting: 1 I0513 22:05:47.317733 6 log.go:172] (0xc001f82370) Go away received I0513 22:05:47.317862 6 log.go:172] (0xc001f82370) (0xc002972280) Stream removed, broadcasting: 1 I0513 22:05:47.317904 6 log.go:172] (0xc001f82370) (0xc0029726e0) Stream removed, broadcasting: 3 I0513 22:05:47.317928 6 log.go:172] (0xc001f82370) (0xc002693b80) Stream removed, broadcasting: 5 May 13 22:05:47.317: INFO: Deleting pod dns-7452... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:47.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7452" for this suite. 
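------------------------------
Note on the pod dump above: the two fields this spec exercises are DNSPolicy:None, which stops the kubelet from injecting the cluster DNS settings, and DNSConfig, whose nameserver and search list are written verbatim into the container's /etc/resolv.conf (which is what the agnhost dns-suffix and dns-server-list execs read back). A sketch of the same pod using the k8s.io/api types — the values mirror the dump; the object name is illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dnsDemoPod builds a pod whose resolv.conf is fully caller-controlled.
func dnsDemoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			// "None" disables cluster-DNS defaulting; DNSConfig then becomes
			// the entire contents of the pod's resolv.conf.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
}
------------------------------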
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":174,"skipped":2929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:47.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 22:05:53.263: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:05:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3740" for this suite. 
• [SLOW TEST:5.893 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2990,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:05:53.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 13 22:05:58.031: INFO: Successfully updated pod "annotationupdate1778d5ca-8fad-42ea-9ad6-d5ea9672ba4e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:06:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6393" for this suite. 
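------------------------------
Note on the annotation-update spec above: the pod mounts its own metadata through a projected downwardAPI volume, and the kubelet re-renders downward-API files when pod metadata changes, so the running container sees the new annotation value in the mounted file without a restart (the suite polls that file to detect the change). A sketch of such a volume — a sketch only; the volume and file names are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// annotationsVolume projects the pod's annotations into a file that the
// kubelet keeps up to date as the annotations are modified.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations", // file under the mount path
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
}
------------------------------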
• [SLOW TEST:8.710 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2996,"failed":0} SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:06:02.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6034, will wait for the garbage collector to delete the pods May 13 22:06:08.233: INFO: Deleting Job.batch foo took: 6.580702ms May 13 22:06:08.533: INFO: Terminating Job.batch foo pods took: 300.329831ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:06:49.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6034" for this suite. • [SLOW TEST:47.264 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":177,"skipped":3001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:06:49.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0513 22:06:59.440006 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 13 22:06:59.440: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:06:59.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2977" for this suite. • [SLOW TEST:10.102 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":178,"skipped":3047,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:06:59.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:07:10.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1067" for this suite. • [SLOW TEST:11.250 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":179,"skipped":3060,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:07:10.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 13 22:07:10.746: INFO: Waiting up to 5m0s for pod "pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce" in namespace "emptydir-2820" to be "success or failure" May 13 22:07:10.800: INFO: Pod "pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce": Phase="Pending", Reason="", readiness=false. Elapsed: 53.462541ms May 13 22:07:12.866: INFO: Pod "pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119229706s May 13 22:07:14.870: INFO: Pod "pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123107487s STEP: Saw pod success May 13 22:07:14.870: INFO: Pod "pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce" satisfied condition "success or failure" May 13 22:07:14.872: INFO: Trying to get logs from node jerma-worker pod pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce container test-container: STEP: delete the pod May 13 22:07:14.975: INFO: Waiting for pod pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce to disappear May 13 22:07:14.986: INFO: Pod pod-899c65ea-affa-4b3b-a2c0-0cd4092a1bce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:07:14.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2820" for this suite. 
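------------------------------
Note on the (non-root,0666,tmpfs) case above: it reduces to a memory-backed emptyDir plus a non-root securityContext; the test container creates a file with mode 0666 on the tmpfs mount, and the suite asserts the mode, content, and mount type from the pod logs. A pod-spec sketch under those assumptions — the image, command, and UID are illustrative stand-ins for the suite's own mounttest image and arguments:

package sketch

import corev1 "k8s.io/api/core/v1"

func int64Ptr(v int64) *int64 { return &v }

// tmpfsPodSpec writes a 0666-mode file into a memory-backed emptyDir as a
// non-root user, then lists it so the result can be checked from the logs.
func tmpfsPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox:1.31", // illustrative image
			Command: []string{"/bin/sh", "-c",
				"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume && mount | grep /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{
				Name: "test-volume", MountPath: "/test-volume",
			}},
			SecurityContext: &corev1.SecurityContext{
				RunAsUser: int64Ptr(1001), // illustrative non-root UID
			},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" makes the emptyDir a tmpfs mount.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
	}
}
------------------------------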
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3061,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:07:14.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:07:15.299: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:07:16.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2424" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":181,"skipped":3063,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:07:16.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:07:16.629: INFO: Creating deployment "webserver-deployment" May 13 22:07:16.634: INFO: Waiting for observed generation 1 May 13 22:07:18.648: INFO: Waiting for all required pods to come up May 13 22:07:18.651: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 13 22:07:30.660: INFO: Waiting for deployment "webserver-deployment" to complete May 13 22:07:30.667: INFO: Updating deployment "webserver-deployment" with a non-existent image May 13 22:07:30.673: INFO: Updating deployment webserver-deployment May 13 22:07:30.673: INFO: Waiting for observed generation 2 May 13 22:07:32.855: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 13 22:07:32.858: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 13 22:07:32.866: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have 
desired number of replicas May 13 22:07:32.902: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 13 22:07:32.902: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 13 22:07:32.904: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 13 22:07:32.907: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 13 22:07:32.907: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 13 22:07:32.911: INFO: Updating deployment webserver-deployment May 13 22:07:32.911: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 13 22:07:33.411: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 13 22:07:33.483: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 13 22:07:33.980: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5490 /apis/apps/v1/namespaces/deployment-5490/deployments/webserver-deployment 56412094-9406-4c55-91b8-4d10c7560d23 15955700 3 2020-05-13 22:07:16 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004153c58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-13 22:07:32 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-13 22:07:33 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 13 22:07:34.162: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5490 /apis/apps/v1/namespaces/deployment-5490/replicasets/webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 15955691 3 2020-05-13 22:07:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] 
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 56412094-9406-4c55-91b8-4d10c7560d23 0xc00430e147 0xc00430e148}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00430e1b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:07:34.162: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 13 22:07:34.162: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5490 /apis/apps/v1/namespaces/deployment-5490/replicasets/webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 15955733 3 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 56412094-9406-4c55-91b8-4d10c7560d23 0xc00430e087 0xc00430e088}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00430e0e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 13 22:07:34.344: INFO: Pod "webserver-deployment-595b5b9587-2vs77" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2vs77 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-2vs77 ea2c3fbf-6e0d-4071-a279-1c0063039b5d 15955599 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402d327 0xc00402d328}] 
[] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.150,StartTime:2020-05-13 22:07:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c09f33fb98d73728ff499a900b24e5bebaa551f66bbf00f955c4b0099df8c0b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.345: INFO: Pod "webserver-deployment-595b5b9587-4sh8p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4sh8p webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-4sh8p ec9f6ccf-69ed-408d-85d0-2a397f9e9194 15955734 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402d4b7 0xc00402d4b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.345: INFO: Pod "webserver-deployment-595b5b9587-65fmj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-65fmj webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-65fmj f7741ce4-1f0e-44b9-a3bf-06f1111f210b 15955724 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402d5d7 0xc00402d5d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExe
cute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.345: INFO: Pod "webserver-deployment-595b5b9587-6fdtb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6fdtb webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-6fdtb a936a1ad-0021-4669-baed-5cd093e748fa 15955732 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402d6f7 0xc00402d6f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Toleration
s:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-13 22:07:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.345: INFO: Pod "webserver-deployment-595b5b9587-6nqp7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6nqp7 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-6nqp7 5e1c5754-86b1-4472-af6c-51f175818f50 15955736 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402d877 0xc00402d878}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.345: INFO: Pod "webserver-deployment-595b5b9587-bzxt9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bzxt9 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-bzxt9 ae3e1f92-3cee-4054-b721-5cb4f7290a3b 15955726 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402d9b7 0xc00402d9b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.346: INFO: Pod "webserver-deployment-595b5b9587-cfpvf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cfpvf webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-cfpvf d8b961c2-eed4-439d-8fa8-faa98a4499e1 15955709 0 
2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402dad7 0xc00402dad8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.346: INFO: Pod "webserver-deployment-595b5b9587-fj68n" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fj68n webserver-deployment-595b5b9587- deployment-5490 
/api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-fj68n 92e242fb-5fe4-4397-8709-b84d350aca62 15955568 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402dbf7 0xc00402dbf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:24 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.254,StartTime:2020-05-13 22:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://850b17fb5d7a41fb3fbe1cf19103ecb2072600533f885829c75a08c6ae28e826,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.254,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.346: INFO: Pod "webserver-deployment-595b5b9587-g22ch" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g22ch webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-g22ch 4551f010-5cfe-4b55-bd6a-20e293f518fd 15955725 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402dd77 0xc00402dd78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-
scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.346: INFO: Pod "webserver-deployment-595b5b9587-gsp96" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gsp96 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-gsp96 9c543318-1713-4a0a-b181-0fe576027f80 15955591 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc00402de97 0xc00402de98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePu
llSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.3,StartTime:2020-05-13 22:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1c72209869ab9036cbe47e1dbde7d02cba9960637ccc31150d1f67162090628c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.346: INFO: Pod "webserver-deployment-595b5b9587-gz8sx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gz8sx webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-gz8sx eede2340-0405-4a89-9080-e890fb8ab105 15955574 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa017 0xc0036fa018}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.2,StartTime:2020-05-13 22:07:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b6af5b45a4d87d9cc39f70bca031a574379a1cfba74aa6fba27ae3d09e7e504a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.346: INFO: Pod "webserver-deployment-595b5b9587-l8t76" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l8t76 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-l8t76 534c08f3-5b6c-4d24-b685-1f8952ba3163 15955550 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa197 0xc0036fa198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.253,StartTime:2020-05-13 22:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dee6a12108baeee4532cb10c12550fe55132bf459c3362e930c033effcad37e0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.347: INFO: Pod "webserver-deployment-595b5b9587-x6rt8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x6rt8 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-x6rt8 c8789120-4ba8-4749-91f6-ae0025031d6b 15955582 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa337 0xc0036fa338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.149,StartTime:2020-05-13 22:07:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1adaade04019ab48e26cea94a604d09e9faef00e4a0f9f3f9d6ab4c00a8142b7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.347: INFO: Pod "webserver-deployment-595b5b9587-xdvgl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xdvgl webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-xdvgl 69b06b32-fba7-4054-9925-6012c6147576 15955739 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa4b7 0xc0036fa4b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.347: INFO: Pod "webserver-deployment-595b5b9587-xzmn9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xzmn9 webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-xzmn9 ab390b71-0f6f-4cb4-a67c-374461f9f06e 15955602 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa5d7 0xc0036fa5d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.152,StartTime:2020-05-13 22:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3dba5b5d3d970bdd40282d2451c33853ac3998c530278d23a7f9a4f02faae5d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.347: INFO: Pod "webserver-deployment-595b5b9587-z82xl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z82xl webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-z82xl 3c1fc2f9-f53f-46f1-905f-15f172c5654e 15955716 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa767 0xc0036fa768}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.348: INFO: Pod "webserver-deployment-595b5b9587-z8xpv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z8xpv webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-z8xpv 34f13d0d-d0b4-43cc-8055-4137e4cb5ede 15955740 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa897 0xc0036fa898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.348: INFO: Pod "webserver-deployment-595b5b9587-zlqjm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zlqjm webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-zlqjm 77f9a2e1-8625-4867-a696-098904cd7171 15955738 0 
2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fa9c7 0xc0036fa9c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.348: INFO: Pod "webserver-deployment-595b5b9587-zmz6p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zmz6p webserver-deployment-595b5b9587- deployment-5490 
/api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-zmz6p f49cd0df-0a6f-4a41-97ef-0521561f26ab 15955708 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036faae7 0xc0036faae8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.348: INFO: Pod "webserver-deployment-595b5b9587-zwvrt" is available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zwvrt webserver-deployment-595b5b9587- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-595b5b9587-zwvrt 639c55d7-7912-4335-a08c-5192645595be 15955614 0 2020-05-13 22:07:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 360d81a4-0814-436f-b33e-28a35d1557e6 0xc0036fac47 0xc0036fac48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.4,StartTime:2020-05-13 22:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:07:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://668c306e47a9baf844bd656999b3bb7efeeed65b61827dd77fc5233f0b0ffbd4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.348: INFO: Pod "webserver-deployment-c7997dcc8-6bpv2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6bpv2 webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-6bpv2 291de12c-930d-40c3-86e3-49ac5628bbfa 15955713 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fadd7 0xc0036fadd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affini
ty:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.349: INFO: Pod "webserver-deployment-c7997dcc8-8vcqp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8vcqp webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-8vcqp 45a0177a-2a8d-4651-9231-ea7a7e351535 15955717 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036faf07 0xc0036faf08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},Imag
ePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.349: INFO: Pod "webserver-deployment-c7997dcc8-b42q6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b42q6 webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-b42q6 f31dac3e-638d-41b0-a5e5-464d3137ef68 15955730 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb037 0xc0036fb038}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGrou
p:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.349: INFO: Pod "webserver-deployment-c7997dcc8-c8csr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c8csr webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-c8csr d4e3d492-96a2-4ff9-8da9-7488cbf97084 15955737 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb167 0xc0036fb168}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOption
s:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-13 22:07:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.350: INFO: Pod "webserver-deployment-c7997dcc8-d2rnn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d2rnn webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-d2rnn 96150be1-7639-4983-be08-8081882eebab 15955728 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb2e7 0xc0036fb2e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.350: INFO: Pod "webserver-deployment-c7997dcc8-h9h87" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-h9h87 webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-h9h87 ac250c90-05ad-4236-a55d-1d2b2ede7716 15955672 0 2020-05-13 22:07:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb417 0xc0036fb418}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-13 22:07:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.350: INFO: Pod "webserver-deployment-c7997dcc8-kwmw8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kwmw8 webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-kwmw8 2709d68a-421f-4747-ac7a-b103522e9e51 15955731 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb597 0xc0036fb598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Read
inessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.350: INFO: Pod "webserver-deployment-c7997dcc8-m7d6n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m7d6n webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-m7d6n f7bbbdbb-a6ec-4bd9-aad2-2c5f8e45cdd8 15955682 0 2020-05-13 22:07:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb6c7 0xc0036fb6c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-13 22:07:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.351: INFO: Pod "webserver-deployment-c7997dcc8-ntsmr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ntsmr webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-ntsmr a77a3c90-e54c-4ad7-bddc-1b72d07f5e4c 15955656 0 2020-05-13 22:07:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb847 0xc0036fb848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-13 22:07:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.351: INFO: Pod "webserver-deployment-c7997dcc8-pkw8c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pkw8c webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-pkw8c ddd9f7fe-337a-4a22-b21c-b28cf4df6186 15955735 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fb9c7 0xc0036fb9c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.351: INFO: Pod "webserver-deployment-c7997dcc8-qzg58" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qzg58 webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-qzg58 6d201747-7c6f-49e7-b87c-6cdea7cefc1f 15955661 0 2020-05-13 22:07:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fbaf7 0xc0036fbaf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeC
lassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-13 22:07:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.351: INFO: Pod "webserver-deployment-c7997dcc8-sp9x2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sp9x2 webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-sp9x2 5dc8df70-8aed-4832-aba9-8b3519e9f879 15955745 0 2020-05-13 22:07:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fbc77 0xc0036fbc78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:34.351: INFO: Pod "webserver-deployment-c7997dcc8-wlxgq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wlxgq webserver-deployment-c7997dcc8- deployment-5490 /api/v1/namespaces/deployment-5490/pods/webserver-deployment-c7997dcc8-wlxgq 7b196d35-c03d-454d-aeda-e5c88554b243 15955677 0 2020-05-13 22:07:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
983e15de-46b2-4ecf-8cb2-310b9aec6747 0xc0036fbda7 0xc0036fbda8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zht8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zht8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zht8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:07:30 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-13 22:07:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 22:07:34.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5490" for this suite.
• [SLOW TEST:17.973 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":182,"skipped":3082,"failed":0}
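The pod dumps above are the diagnostic tail of the proportional-scaling spec: the deployment was mid-rollout to the deliberately unpullable image webserver:404 when it was scaled up, so the controller had to split the added replicas between the old ReplicaSet (available httpd:2.4.38-alpine pods) and the new one (pods stuck in ContainerCreating that can never become ready), and the framework logs every pod that is "not available" while it waits for the split to settle. A rough client-go sketch of the same sequence follows; the namespace and deployment name are taken from the log, while the patch style, the image switch, and the target of 30 replicas are illustrative assumptions, not the test's actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deploys := cs.AppsV1().Deployments("deployment-5490")
	ctx := context.TODO()

	// Wedge the rollout: switch the pod template to an image that can never
	// be pulled, so the new ReplicaSet's pods stay unavailable forever.
	_, err = deploys.Patch(ctx, "webserver-deployment", types.StrategicMergePatchType,
		[]byte(`{"spec":{"template":{"spec":{"containers":[{"name":"httpd","image":"webserver:404"}]}}}}`),
		metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}

	// Scale while the rollout is stuck. Under a RollingUpdate strategy the
	// deployment controller distributes the extra replicas across the old and
	// new ReplicaSets in proportion to their current sizes (30 is an assumed
	// target, not the test's literal value).
	_, err = deploys.Patch(ctx, "webserver-deployment", types.StrategicMergePatchType,
		[]byte(`{"spec":{"replicas":30}}`), metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}

	// Observe the split: AvailableReplicas counts only ready old-image pods,
	// UpdatedReplicas counts the new ReplicaSet's share of the desired total.
	d, err := deploys.Get(ctx, "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("desired=%d updated=%d available=%d\n",
		*d.Spec.Replicas, d.Status.UpdatedReplicas, d.Status.AvailableReplicas)
}

The assertion in the real spec is on how the two ReplicaSets share the desired count, which is why the log enumerates both kinds of pods rather than failing on the unready ones.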
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 13 22:07:34.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
May 13 22:07:34.800: INFO: Waiting up to 5m0s for pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3" in namespace "emptydir-1665" to be "success or failure"
May 13 22:07:34.872: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 71.519863ms
May 13 22:07:36.968: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167603244s
May 13 22:07:39.588: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788081954s
May 13 22:07:41.957: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.156644602s
May 13 22:07:43.970: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.16944539s
May 13 22:07:46.055: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.254355104s
May 13 22:07:48.254: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.453766198s
May 13 22:07:50.399: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.598227708s
May 13 22:07:52.459: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.658792348s
STEP: Saw pod success
May 13 22:07:52.459: INFO: Pod "pod-a392b879-72e6-4b63-924c-81d0cd2dddb3" satisfied condition "success or failure"
May 13 22:07:52.471: INFO: Trying to get logs from node jerma-worker2 pod pod-a392b879-72e6-4b63-924c-81d0cd2dddb3 container test-container:
STEP: delete the pod
May 13 22:07:52.542: INFO: Waiting for pod pod-a392b879-72e6-4b63-924c-81d0cd2dddb3 to disappear
May 13 22:07:52.555: INFO: Pod pod-a392b879-72e6-4b63-924c-81d0cd2dddb3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 22:07:52.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1665" for this suite.
• [SLOW TEST:18.070 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3082,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
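The EmptyDir spec that just passed follows the suite's standard create-run-inspect shape: a one-shot pod mounts an emptyDir volume on the default medium (node-local storage, as opposed to Medium "Memory" tmpfs), prints the mount point's type and permission bits, and exits, so phase Succeeded satisfies the "success or failure" condition and the assertion runs against the captured container log. A minimal sketch of such a pod is below; the busybox image and the shell commands are stand-ins for the suite's dedicated mounttest image and flags.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirModePod builds a pod in the spirit of the mode check: mount an
// emptyDir on the default medium and print the mount point's permissions.
// Create it with any clientset, wait for phase Succeeded, then assert on the
// container log, exactly the "success or failure" flow in the log above.
func emptyDirModePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "ls -ld /test-volume && stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// An empty EmptyDirVolumeSource selects the default medium,
					// i.e. node-local storage; Medium: "Memory" would be tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
}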
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 13 22:07:52.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-70b4c7ae-24be-49bb-9fbe-6164c9f064ab
STEP: Creating a pod to test consume secrets
May 13 22:07:52.681: INFO: Waiting up to 5m0s for pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb" in namespace "secrets-7788" to be "success or failure"
May 13 22:07:52.717: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.043594ms
May 13 22:07:54.893: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211808318s
May 13 22:07:56.915: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233562569s
May 13 22:07:59.100: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Running", Reason="", readiness=true. Elapsed: 6.418835178s
May 13 22:08:01.103: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Running", Reason="", readiness=true. Elapsed: 8.421676318s
May 13 22:08:03.143: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Running", Reason="", readiness=true. Elapsed: 10.461503871s
May 13 22:08:05.154: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Running", Reason="", readiness=true. Elapsed: 12.473053773s
May 13 22:08:07.158: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.477167281s
STEP: Saw pod success
May 13 22:08:07.158: INFO: Pod "pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb" satisfied condition "success or failure"
May 13 22:08:07.162: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb container secret-volume-test:
STEP: delete the pod
May 13 22:08:07.200: INFO: Waiting for pod pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb to disappear
May 13 22:08:07.225: INFO: Pod pod-secrets-1e3c1e42-d814-477a-be55-98348fdb07eb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 22:08:07.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7788" for this suite.
• [SLOW TEST:14.632 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3159,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
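The Secrets defaultMode spec is the same pattern with one twist: the secret volume sets DefaultMode, so every key projected into the mount lands with those permission bits, and the container prints the mode of a projected file for the framework to check. In the sketch below the 0400 mode, the key name, and the mount path are assumed example values; the log does not reveal which mode the test actually set.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretDefaultModePod pairs a secret with a one-shot pod that mounts it
// read-only and prints the mode of one projected file. 0400 is an assumed
// example value for DefaultMode, not the test's literal setting.
func secretDefaultModePod(secretName string) (*corev1.Secret, *corev1.Pod) {
	defaultMode := int32(0400)
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: secretName},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-defaultmode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "stat -c %a /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: &defaultMode, // applied to every projected key
					},
				},
			}},
		},
	}
	return secret, pod
}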
Elapsed: 4.023004536s STEP: Saw pod success May 13 22:08:11.318: INFO: Pod "downwardapi-volume-13c80e48-456a-4d37-b144-e4f79cebbe7d" satisfied condition "success or failure" May 13 22:08:11.321: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-13c80e48-456a-4d37-b144-e4f79cebbe7d container client-container: STEP: delete the pod May 13 22:08:11.378: INFO: Waiting for pod downwardapi-volume-13c80e48-456a-4d37-b144-e4f79cebbe7d to disappear May 13 22:08:11.389: INFO: Pod downwardapi-volume-13c80e48-456a-4d37-b144-e4f79cebbe7d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:11.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-492" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3181,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:11.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 13 22:08:19.684: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 22:08:19.708: INFO: Pod pod-with-prestop-http-hook still exists May 13 22:08:21.708: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 22:08:21.712: INFO: Pod pod-with-prestop-http-hook still exists May 13 22:08:23.708: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 22:08:23.713: INFO: Pod pod-with-prestop-http-hook still exists May 13 22:08:25.708: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 22:08:25.713: INFO: Pod pod-with-prestop-http-hook still exists May 13 22:08:27.708: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 22:08:27.711: INFO: Pod pod-with-prestop-http-hook still exists May 13 22:08:29.708: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 22:08:29.712: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:29.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-759" for this suite. 
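For orientation, the pod deleted above carries its preStop hook in the lifecycle stanza of its container spec. The sketch below is a minimal illustration of that shape, not the manifest from this run; the image, the handler IP, and the path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook    # pod name as seen in the log; the spec below is assumed
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1       # assumed image
    lifecycle:
      preStop:
        httpGet:
          # On pod deletion the kubelet issues this GET before stopping the
          # container, which is what the "check prestop hook" step verifies.
          host: 10.244.1.170          # hypothetical IP of the handler pod created in BeforeEach
          path: /echo?msg=prestop-hook
          port: 8080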
• [SLOW TEST:18.326 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3184,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:29.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-2e63082d-44d1-40a3-b7ea-97bbd0b2142b STEP: Creating a pod to test consume configMaps May 13 22:08:29.875: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f" in namespace "configmap-9959" to be "success or failure" May 13 22:08:29.909: INFO: Pod "pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.390996ms May 13 22:08:31.914: INFO: Pod "pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038702163s May 13 22:08:34.005: INFO: Pod "pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129700289s May 13 22:08:36.047: INFO: Pod "pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.171943089s STEP: Saw pod success May 13 22:08:36.047: INFO: Pod "pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f" satisfied condition "success or failure" May 13 22:08:36.090: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f container configmap-volume-test: STEP: delete the pod May 13 22:08:36.188: INFO: Waiting for pod pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f to disappear May 13 22:08:36.191: INFO: Pod pod-configmaps-9e479cbe-6de1-4707-b3af-f693871fad6f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:36.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9959" for this suite. 
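The "mappings and Item mode" wording refers to the items list of a configMap volume: individual keys are remapped to chosen paths and given a per-file mode. A minimal sketch of a pod consuming the ConfigMap created above in that way (key, path, mode, and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.31               # assumed image
    # Print the projected file's mode and content so the test can assert on them.
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/path/to/data-2; cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-2e63082d-44d1-40a3-b7ea-97bbd0b2142b
      items:
      - key: data-2                   # hypothetical key; only listed keys are projected
        path: path/to/data-2          # ...and each is remapped to its own path
        mode: 0400                    # the per-item "Item mode" under test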
• [SLOW TEST:6.475 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:36.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:36.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4328" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":188,"skipped":3238,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:36.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-8b0564f2-9bfa-4f2b-8096-72955c732ef0 STEP: Creating a pod to test consume secrets May 13 22:08:36.774: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7" in namespace "projected-9159" to be "success or failure" May 13 22:08:36.785: INFO: Pod "pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.524395ms May 13 22:08:38.789: INFO: Pod "pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015454465s May 13 22:08:40.793: INFO: Pod "pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.019637611s May 13 22:08:42.800: INFO: Pod "pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025803062s STEP: Saw pod success May 13 22:08:42.800: INFO: Pod "pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7" satisfied condition "success or failure" May 13 22:08:42.802: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7 container projected-secret-volume-test: STEP: delete the pod May 13 22:08:42.854: INFO: Waiting for pod pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7 to disappear May 13 22:08:42.914: INFO: Pod pod-projected-secrets-d78051a7-9911-4f21-87fe-dea4321703b7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:42.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9159" for this suite. • [SLOW TEST:6.422 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3240,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:42.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 22:08:42.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5722' May 13 22:08:43.057: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 13 22:08:43.057: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 13 22:08:43.086: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-cwbx8] May 13 22:08:43.086: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-cwbx8" in namespace "kubectl-5722" to be "running and ready" May 13 22:08:43.232: INFO: Pod "e2e-test-httpd-rc-cwbx8": Phase="Pending", Reason="", readiness=false. Elapsed: 145.817887ms May 13 22:08:45.236: INFO: Pod "e2e-test-httpd-rc-cwbx8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149960351s May 13 22:08:47.241: INFO: Pod "e2e-test-httpd-rc-cwbx8": Phase="Running", Reason="", readiness=true. Elapsed: 4.154569675s May 13 22:08:47.241: INFO: Pod "e2e-test-httpd-rc-cwbx8" satisfied condition "running and ready" May 13 22:08:47.241: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-cwbx8] May 13 22:08:47.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5722' May 13 22:08:47.390: INFO: stderr: "" May 13 22:08:47.390: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.172. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.172. Set the 'ServerName' directive globally to suppress this message\n[Wed May 13 22:08:45.634197 2020] [mpm_event:notice] [pid 1:tid 140520171633512] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed May 13 22:08:45.634254 2020] [core:notice] [pid 1:tid 140520171633512] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 13 22:08:47.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5722' May 13 22:08:47.498: INFO: stderr: "" May 13 22:08:47.498: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:47.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5722" for this suite. 
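The deprecation warning in stderr is expected: --generator=run/v1 makes kubectl expand the run command into a ReplicationController. The object it creates looks roughly like the sketch below (label conventions recalled from the generator's behavior in this release; treat details as indicative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
  labels:
    run: e2e-test-httpd-rc            # the run/v1 generator keys everything off a run=<name> label
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine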
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":190,"skipped":3249,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:47.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:08:47.573: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8" in namespace "security-context-test-2376" to be "success or failure" May 13 22:08:47.577: INFO: Pod "busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962666ms May 13 22:08:49.650: INFO: Pod "busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076827435s May 13 22:08:51.654: INFO: Pod "busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081139729s May 13 22:08:51.655: INFO: Pod "busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8" satisfied condition "success or failure" May 13 22:08:51.660: INFO: Got logs for pod "busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:51.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2376" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:51.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-c3f46b2e-a9a0-40ee-a506-970bed6a80ef STEP: Creating a pod to test consume configMaps May 13 22:08:52.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e" in namespace "projected-6234" to be "success or failure" May 13 22:08:52.116: INFO: Pod "pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 41.858806ms May 13 22:08:54.120: INFO: Pod "pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045716812s May 13 22:08:56.124: INFO: Pod "pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049806901s STEP: Saw pod success May 13 22:08:56.124: INFO: Pod "pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e" satisfied condition "success or failure" May 13 22:08:56.127: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e container projected-configmap-volume-test: STEP: delete the pod May 13 22:08:56.176: INFO: Waiting for pod pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e to disappear May 13 22:08:56.246: INFO: Pod pod-projected-configmaps-39a3d7ae-379d-46a6-a65d-0767e9d35a4e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:08:56.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6234" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3277,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:08:56.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 13 22:08:56.584: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:08:56.666: INFO: Waiting for terminating namespaces to be deleted... May 13 22:08:56.669: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 13 22:08:56.680: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:08:56.680: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:08:56.680: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:08:56.680: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:08:56.680: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 13 22:08:56.687: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 13 22:08:56.687: INFO: Container kube-hunter ready: false, restart count 0 May 13 22:08:56.687: INFO: busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8 from security-context-test-2376 started at 2020-05-13 22:08:47 +0000 UTC (1 container statuses recorded) May 13 22:08:56.687: INFO: Container busybox-privileged-false-ca445eeb-e908-4bab-b170-478776062ca8 ready: false, restart count 0 May 13 22:08:56.687: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:08:56.687: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:08:56.687: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 13 22:08:56.687: INFO: Container kube-bench ready: false, restart count 0 May 13 22:08:56.687: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:08:56.687: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-30094614-297a-4a51-97a2-13a0818479fe 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-30094614-297a-4a51-97a2-13a0818479fe off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-30094614-297a-4a51-97a2-13a0818479fe [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:09:04.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8845" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.631 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":193,"skipped":3298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:09:04.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:09:04.985: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 13 22:09:04.991: INFO: Number of nodes with available pods: 0 May 13 22:09:04.991: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 13 22:09:05.079: INFO: Number of nodes with available pods: 0 May 13 22:09:05.079: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:06.154: INFO: Number of nodes with available pods: 0 May 13 22:09:06.154: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:07.082: INFO: Number of nodes with available pods: 0 May 13 22:09:07.082: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:08.082: INFO: Number of nodes with available pods: 0 May 13 22:09:08.082: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:09.082: INFO: Number of nodes with available pods: 1 May 13 22:09:09.082: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 13 22:09:09.179: INFO: Number of nodes with available pods: 1 May 13 22:09:09.179: INFO: Number of running nodes: 0, number of available pods: 1 May 13 22:09:10.185: INFO: Number of nodes with available pods: 0 May 13 22:09:10.185: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 13 22:09:10.229: INFO: Number of nodes with available pods: 0 May 13 22:09:10.229: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:11.231: INFO: Number of nodes with available pods: 0 May 13 22:09:11.231: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:12.233: INFO: Number of nodes with available pods: 0 May 13 22:09:12.233: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:13.233: INFO: Number of nodes with available pods: 0 May 13 22:09:13.233: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:14.233: INFO: Number of nodes with available pods: 0 May 13 22:09:14.233: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:15.287: INFO: Number of nodes with available pods: 0 May 13 22:09:15.287: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:16.234: INFO: Number of nodes with available pods: 0 May 13 22:09:16.234: INFO: Node jerma-worker is running more than one daemon pod May 13 22:09:17.232: INFO: Number of nodes with available pods: 1 May 13 22:09:17.232: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3949, will wait for the garbage collector to delete the pods May 13 22:09:17.294: INFO: Deleting DaemonSet.extensions daemon-set took: 6.253897ms May 13 22:09:17.594: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.299712ms May 13 22:09:29.297: INFO: Number of nodes with available pods: 0 May 13 22:09:29.297: INFO: Number of running nodes: 0, number of available pods: 0 May 13 22:09:29.299: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3949/daemonsets","resourceVersion":"15956695"},"items":null} May 13 22:09:29.302: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3949/pods","resourceVersion":"15956695"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 22:09:29.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3949" for this suite. • [SLOW TEST:24.458 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":194,"skipped":3321,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:09:29.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 13 22:09:29.426: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:09:29.503: INFO: Waiting for terminating namespaces to be deleted... May 13 22:09:29.506: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 13 22:09:29.511: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:09:29.511: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:09:29.511: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:09:29.511: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:09:29.511: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 13 22:09:29.516: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:09:29.516: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:09:29.516: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 13 22:09:29.516: INFO: Container kube-hunter ready: false, restart count 0 May 13 22:09:29.516: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:09:29.516: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:09:29.516: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 13 22:09:29.516: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0880472b-a6b7-4742-836d-173834d9e36d 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-0880472b-a6b7-4742-836d-173834d9e36d off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0880472b-a6b7-4742-836d-173834d9e36d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:14:37.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4719" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.492 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":195,"skipped":3339,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:14:37.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:14:37.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23" in namespace "downward-api-7506" to be "success or failure" May 13 22:14:37.921: INFO: Pod "downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23": Phase="Pending", Reason="", readiness=false. Elapsed: 21.235852ms May 13 22:14:39.939: INFO: Pod "downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039429897s May 13 22:14:41.965: INFO: Pod "downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064857988s STEP: Saw pod success May 13 22:14:41.965: INFO: Pod "downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23" satisfied condition "success or failure" May 13 22:14:41.967: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23 container client-container: STEP: delete the pod May 13 22:14:42.005: INFO: Waiting for pod downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23 to disappear May 13 22:14:42.391: INFO: Pod downwardapi-volume-1240b670-4ef3-46ae-8bce-e8e9dc401f23 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:14:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7506" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3349,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:14:42.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:14:42.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860" in namespace "downward-api-1624" to be "success or failure" May 13 22:14:42.884: INFO: Pod "downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860": Phase="Pending", Reason="", readiness=false. Elapsed: 10.295561ms May 13 22:14:44.887: INFO: Pod "downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012957207s May 13 22:14:46.889: INFO: Pod "downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015347493s STEP: Saw pod success May 13 22:14:46.889: INFO: Pod "downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860" satisfied condition "success or failure" May 13 22:14:46.892: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860 container client-container: STEP: delete the pod May 13 22:14:46.933: INFO: Waiting for pod downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860 to disappear May 13 22:14:46.950: INFO: Pod downwardapi-volume-28c9a018-e2a3-4afb-97f6-e71c10947860 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:14:46.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1624" for this suite. 
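Both Downward API volume cases in this stretch of the log follow the same pattern: a downwardAPI volume projects pod fields as files, and the client container cats them back. A minimal sketch for the "podname only" case (paths and image are illustrative); the preceding memory-limit case swaps the fieldRef for a resourceFieldRef on limits.memory:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative; the run used generated names
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31               # assumed image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # the only field projected in the "podname only" case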
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3352,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:14:46.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 13 22:14:47.502: INFO: Waiting up to 5m0s for pod "pod-6f8b7777-25a8-419f-b877-39c744bb8e2b" in namespace "emptydir-2960" to be "success or failure" May 13 22:14:47.644: INFO: Pod "pod-6f8b7777-25a8-419f-b877-39c744bb8e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 141.806644ms May 13 22:14:49.648: INFO: Pod "pod-6f8b7777-25a8-419f-b877-39c744bb8e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146276136s May 13 22:14:51.652: INFO: Pod "pod-6f8b7777-25a8-419f-b877-39c744bb8e2b": Phase="Running", Reason="", readiness=true. Elapsed: 4.15049632s May 13 22:14:53.657: INFO: Pod "pod-6f8b7777-25a8-419f-b877-39c744bb8e2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.155229463s STEP: Saw pod success May 13 22:14:53.657: INFO: Pod "pod-6f8b7777-25a8-419f-b877-39c744bb8e2b" satisfied condition "success or failure" May 13 22:14:53.660: INFO: Trying to get logs from node jerma-worker2 pod pod-6f8b7777-25a8-419f-b877-39c744bb8e2b container test-container: STEP: delete the pod May 13 22:14:53.696: INFO: Waiting for pod pod-6f8b7777-25a8-419f-b877-39c744bb8e2b to disappear May 13 22:14:53.700: INFO: Pod pod-6f8b7777-25a8-419f-b877-39c744bb8e2b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:14:53.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2960" for this suite. 
• [SLOW TEST:6.751 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:14:53.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8378/secret-test-a92b63a7-c618-405a-b526-6164b5953514 STEP: Creating a pod to test consume secrets May 13 22:14:53.838: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33" in namespace "secrets-8378" to be "success or failure" May 13 22:14:53.869: INFO: Pod "pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33": Phase="Pending", Reason="", readiness=false. Elapsed: 31.402624ms May 13 22:14:55.890: INFO: Pod "pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052378288s May 13 22:14:57.894: INFO: Pod "pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056849881s STEP: Saw pod success May 13 22:14:57.895: INFO: Pod "pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33" satisfied condition "success or failure" May 13 22:14:57.897: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33 container env-test: STEP: delete the pod May 13 22:14:57.924: INFO: Waiting for pod pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33 to disappear May 13 22:14:57.928: INFO: Pod pod-configmaps-6ce7ae4b-0e1e-4418-8030-30b7d9927d33 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:14:57.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8378" for this suite. 
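Unlike the volume-based Secret tests earlier in the run, this one injects the secret through the environment. A minimal sketch (variable name, key, and image are hypothetical; the secret name is the one created above):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.31               # assumed image
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA               # hypothetical variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-a92b63a7-c618-405a-b526-6164b5953514
          key: data-1                 # hypothetical key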
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3402,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:14:57.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:14:58.043: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:15:02.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9846" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3417,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:15:02.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 13 22:15:02.202: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:15:17.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6584" for this suite. 
• [SLOW TEST:15.721 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":201,"skipped":3438,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:15:17.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 13 22:15:17.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5249' May 13 22:15:21.221: INFO: stderr: "" May 13 22:15:21.221: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 13 22:15:22.226: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:15:22.226: INFO: Found 0 / 1 May 13 22:15:23.228: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:15:23.228: INFO: Found 0 / 1 May 13 22:15:24.225: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:15:24.225: INFO: Found 0 / 1 May 13 22:15:25.292: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:15:25.292: INFO: Found 1 / 1 May 13 22:15:25.292: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 13 22:15:25.296: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:15:25.296: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 13 22:15:25.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-pr6ks --namespace=kubectl-5249 -p {"metadata":{"annotations":{"x":"y"}}}' May 13 22:15:25.393: INFO: stderr: "" May 13 22:15:25.393: INFO: stdout: "pod/agnhost-master-pr6ks patched\n" STEP: checking annotations May 13 22:15:25.429: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:15:25.429: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:15:25.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5249" for this suite. 
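The inline -p argument above is a strategic merge patch in JSON; kubectl accepts the same patch in YAML, which is easier to keep in a file. The equivalent patch document:

# patch.yaml -- equivalent YAML form of the strategic merge patch from the log
metadata:
  annotations:
    x: "y"

Applied with, for example: kubectl patch pod agnhost-master-pr6ks --namespace=kubectl-5249 -p "$(cat patch.yaml)"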
• [SLOW TEST:7.603 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":202,"skipped":3449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:15:25.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-574 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 22:15:25.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 13 22:15:49.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.181:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-574 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:15:49.881: INFO: >>> kubeConfig: /root/.kube/config I0513 22:15:49.910105 6 log.go:172] (0xc002bea370) (0xc002693860) Create stream I0513 22:15:49.910137 6 log.go:172] (0xc002bea370) (0xc002693860) Stream added, broadcasting: 1 I0513 22:15:49.911426 6 log.go:172] (0xc002bea370) Reply frame received for 1 I0513 22:15:49.911458 6 log.go:172] (0xc002bea370) (0xc0019c25a0) Create stream I0513 22:15:49.911472 6 log.go:172] (0xc002bea370) (0xc0019c25a0) Stream added, broadcasting: 3 I0513 22:15:49.912210 6 log.go:172] (0xc002bea370) Reply frame received for 3 I0513 22:15:49.912241 6 log.go:172] (0xc002bea370) (0xc002234d20) Create stream I0513 22:15:49.912248 6 log.go:172] (0xc002bea370) (0xc002234d20) Stream added, broadcasting: 5 I0513 22:15:49.913523 6 log.go:172] (0xc002bea370) Reply frame received for 5 I0513 22:15:50.002565 6 log.go:172] (0xc002bea370) Data frame received for 5 I0513 22:15:50.002589 6 log.go:172] (0xc002234d20) (5) Data frame handling I0513 22:15:50.002604 6 log.go:172] (0xc002bea370) Data frame received for 3 I0513 22:15:50.002616 6 log.go:172] (0xc0019c25a0) (3) Data frame handling I0513 22:15:50.002632 6 log.go:172] (0xc0019c25a0) (3) Data frame sent I0513 22:15:50.002639 6 log.go:172] (0xc002bea370) Data frame received for 3 I0513 22:15:50.002644 6 log.go:172] (0xc0019c25a0) (3) Data frame handling I0513 22:15:50.003947 6 log.go:172] (0xc002bea370) Data frame received for 1 I0513 22:15:50.003967 6 log.go:172] (0xc002693860) (1) Data frame 
handling I0513 22:15:50.003993 6 log.go:172] (0xc002693860) (1) Data frame sent I0513 22:15:50.004008 6 log.go:172] (0xc002bea370) (0xc002693860) Stream removed, broadcasting: 1 I0513 22:15:50.004022 6 log.go:172] (0xc002bea370) Go away received I0513 22:15:50.004160 6 log.go:172] (0xc002bea370) (0xc002693860) Stream removed, broadcasting: 1 I0513 22:15:50.004185 6 log.go:172] (0xc002bea370) (0xc0019c25a0) Stream removed, broadcasting: 3 I0513 22:15:50.004195 6 log.go:172] (0xc002bea370) (0xc002234d20) Stream removed, broadcasting: 5 May 13 22:15:50.004: INFO: Found all expected endpoints: [netserver-0] May 13 22:15:50.006: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.25:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-574 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:15:50.006: INFO: >>> kubeConfig: /root/.kube/config I0513 22:15:50.042738 6 log.go:172] (0xc002bea8f0) (0xc002693f40) Create stream I0513 22:15:50.042761 6 log.go:172] (0xc002bea8f0) (0xc002693f40) Stream added, broadcasting: 1 I0513 22:15:50.044124 6 log.go:172] (0xc002bea8f0) Reply frame received for 1 I0513 22:15:50.044160 6 log.go:172] (0xc002bea8f0) (0xc001dfa0a0) Create stream I0513 22:15:50.044178 6 log.go:172] (0xc002bea8f0) (0xc001dfa0a0) Stream added, broadcasting: 3 I0513 22:15:50.044899 6 log.go:172] (0xc002bea8f0) Reply frame received for 3 I0513 22:15:50.044931 6 log.go:172] (0xc002bea8f0) (0xc002235040) Create stream I0513 22:15:50.044943 6 log.go:172] (0xc002bea8f0) (0xc002235040) Stream added, broadcasting: 5 I0513 22:15:50.045774 6 log.go:172] (0xc002bea8f0) Reply frame received for 5 I0513 22:15:50.112126 6 log.go:172] (0xc002bea8f0) Data frame received for 5 I0513 22:15:50.112157 6 log.go:172] (0xc002235040) (5) Data frame handling I0513 22:15:50.112175 6 log.go:172] (0xc002bea8f0) Data frame received for 3 I0513 22:15:50.112187 6 log.go:172] (0xc001dfa0a0) (3) Data frame handling I0513 22:15:50.112199 6 log.go:172] (0xc001dfa0a0) (3) Data frame sent I0513 22:15:50.112210 6 log.go:172] (0xc002bea8f0) Data frame received for 3 I0513 22:15:50.112220 6 log.go:172] (0xc001dfa0a0) (3) Data frame handling I0513 22:15:50.113382 6 log.go:172] (0xc002bea8f0) Data frame received for 1 I0513 22:15:50.113404 6 log.go:172] (0xc002693f40) (1) Data frame handling I0513 22:15:50.113416 6 log.go:172] (0xc002693f40) (1) Data frame sent I0513 22:15:50.113431 6 log.go:172] (0xc002bea8f0) (0xc002693f40) Stream removed, broadcasting: 1 I0513 22:15:50.113503 6 log.go:172] (0xc002bea8f0) (0xc002693f40) Stream removed, broadcasting: 1 I0513 22:15:50.113522 6 log.go:172] (0xc002bea8f0) (0xc001dfa0a0) Stream removed, broadcasting: 3 I0513 22:15:50.113587 6 log.go:172] (0xc002bea8f0) Go away received I0513 22:15:50.113629 6 log.go:172] (0xc002bea8f0) (0xc002235040) Stream removed, broadcasting: 5 May 13 22:15:50.113: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:15:50.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-574" for this suite. 
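Decoded, the two ExecWithOptions blocks above run curl against each netserver pod's /hostName endpoint (10.244.1.181 and 10.244.2.25) from inside host-test-container-pod, checking node-to-pod HTTP reachability. As a standalone illustration, a pod performing the same probe might look like this (image tag and the hostNetwork choice are assumptions; the curl line is taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod-example    # illustrative stand-in for the pod used above
spec:
  hostNetwork: true                        # assumed: the probe should originate from the node's network
  restartPolicy: Never
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed tag for this release
    command: ["sh", "-c", "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.181:8080/hostName"]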
• [SLOW TEST:24.681 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3487,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:15:50.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:15:50.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3568" for this suite. 
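The discovery walk above can be replayed directly against the API server with kubectl's raw mode; this sketch assumes jq is installed and reuses only the endpoints named in the steps above:

# /apis should list the apiextensions.k8s.io group and its v1 version
kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
# group-level discovery document
kubectl get --raw /apis/apiextensions.k8s.io | jq .
# the v1 document should include a customresourcedefinitions resource
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'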
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":204,"skipped":3494,"failed":0} SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:15:50.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c0ae9185-4ec7-46be-96af-aa70a507d6ef STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c0ae9185-4ec7-46be-96af-aa70a507d6ef STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:15:56.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6721" for this suite. • [SLOW TEST:6.712 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3496,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:15:57.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6611/configmap-test-326f0130-699e-4a0f-aa2b-dd1c1d4678da STEP: Creating a pod to test consume configMaps May 13 22:15:57.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e" in namespace "configmap-6611" to be "success or failure" May 13 22:15:57.240: INFO: Pod "pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061412ms May 13 22:15:59.268: INFO: Pod "pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029575089s May 13 22:16:01.270: INFO: Pod "pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031947976s STEP: Saw pod success May 13 22:16:01.270: INFO: Pod "pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e" satisfied condition "success or failure" May 13 22:16:01.272: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e container env-test: STEP: delete the pod May 13 22:16:01.442: INFO: Waiting for pod pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e to disappear May 13 22:16:01.450: INFO: Pod pod-configmaps-ee639120-7020-4c31-9cdd-3c98c171b31e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:16:01.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6611" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3499,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:16:01.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b8bd2f8e-0159-4ce7-8358-fe90d9ae2849 STEP: Creating a pod to test consume secrets May 13 22:16:01.529: INFO: Waiting up to 5m0s for pod "pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba" in namespace "secrets-6651" to be "success or failure" May 13 22:16:01.579: INFO: Pod "pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba": Phase="Pending", Reason="", readiness=false. Elapsed: 49.782199ms May 13 22:16:03.591: INFO: Pod "pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062030228s May 13 22:16:05.596: INFO: Pod "pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066802152s May 13 22:16:07.600: INFO: Pod "pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.070674527s STEP: Saw pod success May 13 22:16:07.600: INFO: Pod "pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba" satisfied condition "success or failure" May 13 22:16:07.603: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba container secret-volume-test: STEP: delete the pod May 13 22:16:07.669: INFO: Waiting for pod pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba to disappear May 13 22:16:07.672: INFO: Pod pod-secrets-4cfbcdd6-cd09-4447-8257-b806b55545ba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:16:07.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6651" for this suite. • [SLOW TEST:6.247 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3500,"failed":0} SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:16:07.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 13 22:16:07.839: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3110" to be "success or failure" May 13 22:16:08.622: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 782.91627ms May 13 22:16:10.626: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.786765697s May 13 22:16:12.630: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.790850119s May 13 22:16:14.634: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.794766639s STEP: Saw pod success May 13 22:16:14.634: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 13 22:16:14.637: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 13 22:16:14.657: INFO: Waiting for pod pod-host-path-test to disappear May 13 22:16:14.660: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:16:14.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3110" for this suite. • [SLOW TEST:6.962 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3502,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:16:14.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:16:14.770: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:16:18.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8844" for this suite. 
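kubectl exec exercises the same pod exec subresource that the websocket test above drives: the client upgrades its connection to a streaming protocol and multiplexes stdin/stdout/stderr frames, which is what the log.go stream traffic earlier in this log shows. A minimal sketch, with a hypothetical pod name:

kubectl run exec-demo --image=busybox --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/exec-demo --timeout=60s
# served via the API server's /api/v1/namespaces/<ns>/pods/exec-demo/exec subresource
kubectl exec exec-demo -- echo remote-exec-ok
kubectl delete pod exec-demo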
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3523,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:16:18.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-eb348eb8-a898-4325-96a7-7f7b1428734c in namespace container-probe-1435 May 13 22:16:22.995: INFO: Started pod busybox-eb348eb8-a898-4325-96a7-7f7b1428734c in namespace container-probe-1435 STEP: checking the pod's current state and verifying that restartCount is present May 13 22:16:22.999: INFO: Initial restart count of pod busybox-eb348eb8-a898-4325-96a7-7f7b1428734c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:20:23.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1435" for this suite. 
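The probe test above creates a busybox pod whose exec liveness probe keeps succeeding, then watches for roughly four minutes (22:16 to 22:20 in the timestamps) to confirm restartCount stays at 0. A minimal sketch of an equivalent pod, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# should remain 0 for as long as /tmp/health exists
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'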
• [SLOW TEST:245.065 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3523,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:20:23.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-96fe8d36-d5b7-4c63-96f0-ebfb14562ed0 STEP: Creating a pod to test consume secrets May 13 22:20:24.110: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3" in namespace "projected-6194" to be "success or failure" May 13 22:20:24.128: INFO: Pod "pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.470856ms May 13 22:20:26.248: INFO: Pod "pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137514286s May 13 22:20:28.252: INFO: Pod "pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3": Phase="Running", Reason="", readiness=true. Elapsed: 4.141353689s May 13 22:20:30.255: INFO: Pod "pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.144972125s STEP: Saw pod success May 13 22:20:30.255: INFO: Pod "pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3" satisfied condition "success or failure" May 13 22:20:30.258: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3 container projected-secret-volume-test: STEP: delete the pod May 13 22:20:30.402: INFO: Waiting for pod pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3 to disappear May 13 22:20:30.456: INFO: Pod pod-projected-secrets-4627579d-aea6-4905-8749-e1f4cbe427a3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:20:30.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6194" for this suite. 
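The projected-secret test mounts a secret through a projected volume and asserts that the resulting file mode matches defaultMode. A minimal sketch, assuming hypothetical secret and pod names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected; cat /etc/projected/data-1"]
    volumeMounts:
    - name: vol
      mountPath: /etc/projected
  volumes:
  - name: vol
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # file should be listed with mode -r--------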
• [SLOW TEST:6.583 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3524,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:20:30.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:20:30.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c" in namespace "projected-2889" to be "success or failure" May 13 22:20:30.640: INFO: Pod "downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548481ms May 13 22:20:32.645: INFO: Pod "downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009376574s May 13 22:20:34.651: INFO: Pod "downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c": Phase="Running", Reason="", readiness=true. Elapsed: 4.014602085s May 13 22:20:36.655: INFO: Pod "downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018712113s STEP: Saw pod success May 13 22:20:36.655: INFO: Pod "downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c" satisfied condition "success or failure" May 13 22:20:36.657: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c container client-container: STEP: delete the pod May 13 22:20:36.686: INFO: Waiting for pod downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c to disappear May 13 22:20:36.690: INFO: Pod downwardapi-volume-f6f46510-116a-4a52-8ce6-15feb338b34c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:20:36.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2889" for this suite. 
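Here the downward API exposes the container's own memory limit as a file in a projected volume; with the default divisor the file holds the limit in bytes (67108864 for a 64Mi limit). A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downward-demo   # prints 67108864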
• [SLOW TEST:6.124 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3525,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:20:36.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:20:36.836: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1f679529-372d-430b-b701-3c4a2f8fef87" in namespace "security-context-test-4565" to be "success or failure" May 13 22:20:36.840: INFO: Pod "busybox-user-65534-1f679529-372d-430b-b701-3c4a2f8fef87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.474104ms May 13 22:20:38.844: INFO: Pod "busybox-user-65534-1f679529-372d-430b-b701-3c4a2f8fef87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007914778s May 13 22:20:40.990: INFO: Pod "busybox-user-65534-1f679529-372d-430b-b701-3c4a2f8fef87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153713981s May 13 22:20:40.990: INFO: Pod "busybox-user-65534-1f679529-372d-430b-b701-3c4a2f8fef87" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:20:40.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4565" for this suite. 
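The security-context test above simply runs a container with runAsUser: 65534 and verifies the effective UID. A minimal sketch with a hypothetical pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsUser: 65534
EOF
kubectl logs uid-demo   # expected output: 65534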
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3526,"failed":0} S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:20:40.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 13 22:20:45.707: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c548e3ba-806d-4944-88aa-713138231a97" May 13 22:20:45.707: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c548e3ba-806d-4944-88aa-713138231a97" in namespace "pods-3516" to be "terminated due to deadline exceeded" May 13 22:20:45.726: INFO: Pod "pod-update-activedeadlineseconds-c548e3ba-806d-4944-88aa-713138231a97": Phase="Running", Reason="", readiness=true. Elapsed: 18.170236ms May 13 22:20:47.729: INFO: Pod "pod-update-activedeadlineseconds-c548e3ba-806d-4944-88aa-713138231a97": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021741594s May 13 22:20:47.729: INFO: Pod "pod-update-activedeadlineseconds-c548e3ba-806d-4944-88aa-713138231a97" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:20:47.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3516" for this suite. 
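spec.activeDeadlineSeconds is one of the few pod fields that may be mutated on a running pod, which is what the test above relies on: once the deadline passes, the pod is failed with reason DeadlineExceeded. A hedged sketch, assuming a hypothetical long-running pod named pause-demo:

kubectl run pause-demo --image=busybox --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/pause-demo --timeout=60s
kubectl patch pod pause-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
# after roughly 5s the phase flips to Failed with reason DeadlineExceeded
kubectl get pod pause-demo -o jsonpath='{.status.phase}/{.status.reason}'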
• [SLOW TEST:6.740 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:20:47.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:21:07.864: INFO: Container started at 2020-05-13 22:20:50 +0000 UTC, pod became ready at 2020-05-13 22:21:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:21:07.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6889" for this suite. 
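The readiness test asserts that the pod does not report Ready before the probe's initial delay elapses and that the container is never restarted. An equivalent pod, sketched with hypothetical names and delay value:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["sh", "-c", "true"]
      initialDelaySeconds: 30
      periodSeconds: 5
EOF
# stays False for at least the first 30 seconds, then flips to True
kubectl get pod readiness-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'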
• [SLOW TEST:20.135 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3559,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:21:07.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 13 22:21:07.941: INFO: namespace kubectl-6365 May 13 22:21:07.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6365' May 13 22:21:08.310: INFO: stderr: "" May 13 22:21:08.311: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 13 22:21:09.315: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:21:09.315: INFO: Found 0 / 1 May 13 22:21:10.350: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:21:10.350: INFO: Found 0 / 1 May 13 22:21:11.316: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:21:11.316: INFO: Found 0 / 1 May 13 22:21:12.344: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:21:12.344: INFO: Found 1 / 1 May 13 22:21:12.344: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 13 22:21:12.348: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:21:12.348: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 13 22:21:12.348: INFO: wait on agnhost-master startup in kubectl-6365 May 13 22:21:12.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-7p4s6 agnhost-master --namespace=kubectl-6365' May 13 22:21:12.452: INFO: stderr: "" May 13 22:21:12.452: INFO: stdout: "Paused\n" STEP: exposing RC May 13 22:21:12.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6365' May 13 22:21:12.593: INFO: stderr: "" May 13 22:21:12.593: INFO: stdout: "service/rm2 exposed\n" May 13 22:21:12.600: INFO: Service rm2 in namespace kubectl-6365 found. 
STEP: exposing service May 13 22:21:14.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6365' May 13 22:21:14.770: INFO: stderr: "" May 13 22:21:14.770: INFO: stdout: "service/rm3 exposed\n" May 13 22:21:14.835: INFO: Service rm3 in namespace kubectl-6365 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:21:16.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6365" for this suite. • [SLOW TEST:9.040 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":216,"skipped":3570,"failed":0} SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:21:16.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:21:16.938: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3107 I0513 22:21:16.963823 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3107, replica count: 1 I0513 22:21:18.014213 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:21:19.014400 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:21:20.015034 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:21:21.019808 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:21:21.187: INFO: Created: latency-svc-gpp2c May 13 22:21:21.202: INFO: Got endpoints: latency-svc-gpp2c [82.078209ms] May 13 22:21:21.241: INFO: Created: latency-svc-jt57s May 13 22:21:21.256: INFO: Got endpoints: latency-svc-jt57s [53.991214ms] May 13 22:21:21.327: INFO: Created: latency-svc-jcslv May 13 22:21:21.331: INFO: Got endpoints: latency-svc-jcslv [128.88775ms] May 13 22:21:21.366: INFO: Created: latency-svc-8p7fw May 13 22:21:21.382: INFO: Got endpoints: latency-svc-8p7fw [180.011319ms] May 13 22:21:21.402: INFO: Created: latency-svc-cqqfm May 13 22:21:21.412: INFO: Got 
endpoints: latency-svc-cqqfm [210.676282ms] May 13 22:21:21.500: INFO: Created: latency-svc-cvkvb May 13 22:21:21.505: INFO: Got endpoints: latency-svc-cvkvb [303.171487ms] May 13 22:21:21.540: INFO: Created: latency-svc-nlkg4 May 13 22:21:21.557: INFO: Got endpoints: latency-svc-nlkg4 [355.416221ms] May 13 22:21:21.582: INFO: Created: latency-svc-kbxc5 May 13 22:21:21.638: INFO: Got endpoints: latency-svc-kbxc5 [436.171167ms] May 13 22:21:21.648: INFO: Created: latency-svc-g6qzp May 13 22:21:21.659: INFO: Got endpoints: latency-svc-g6qzp [456.825448ms] May 13 22:21:21.679: INFO: Created: latency-svc-4xdvz May 13 22:21:21.696: INFO: Got endpoints: latency-svc-4xdvz [493.840824ms] May 13 22:21:21.715: INFO: Created: latency-svc-vn4hg May 13 22:21:21.732: INFO: Got endpoints: latency-svc-vn4hg [530.334697ms] May 13 22:21:21.785: INFO: Created: latency-svc-tkkbn May 13 22:21:21.789: INFO: Got endpoints: latency-svc-tkkbn [587.081762ms] May 13 22:21:21.829: INFO: Created: latency-svc-f2qdq May 13 22:21:21.847: INFO: Got endpoints: latency-svc-f2qdq [645.234156ms] May 13 22:21:21.943: INFO: Created: latency-svc-mxmmg May 13 22:21:21.955: INFO: Got endpoints: latency-svc-mxmmg [752.689032ms] May 13 22:21:21.990: INFO: Created: latency-svc-hh66s May 13 22:21:22.008: INFO: Got endpoints: latency-svc-hh66s [806.708111ms] May 13 22:21:22.099: INFO: Created: latency-svc-ksp2z May 13 22:21:22.105: INFO: Got endpoints: latency-svc-ksp2z [902.639893ms] May 13 22:21:22.141: INFO: Created: latency-svc-l7md9 May 13 22:21:22.160: INFO: Got endpoints: latency-svc-l7md9 [903.876552ms] May 13 22:21:22.195: INFO: Created: latency-svc-gns8f May 13 22:21:22.266: INFO: Got endpoints: latency-svc-gns8f [934.943322ms] May 13 22:21:22.314: INFO: Created: latency-svc-pwxz9 May 13 22:21:22.328: INFO: Got endpoints: latency-svc-pwxz9 [946.055669ms] May 13 22:21:22.392: INFO: Created: latency-svc-9vhmd May 13 22:21:22.423: INFO: Created: latency-svc-zsj4j May 13 22:21:22.423: INFO: Got endpoints: latency-svc-9vhmd [1.010886887s] May 13 22:21:22.461: INFO: Got endpoints: latency-svc-zsj4j [955.46791ms] May 13 22:21:22.554: INFO: Created: latency-svc-zt6r6 May 13 22:21:22.557: INFO: Got endpoints: latency-svc-zt6r6 [999.845676ms] May 13 22:21:22.609: INFO: Created: latency-svc-cvpqw May 13 22:21:22.624: INFO: Got endpoints: latency-svc-cvpqw [986.0719ms] May 13 22:21:22.651: INFO: Created: latency-svc-xscq2 May 13 22:21:22.703: INFO: Got endpoints: latency-svc-xscq2 [1.044472225s] May 13 22:21:22.715: INFO: Created: latency-svc-xtcjd May 13 22:21:22.733: INFO: Got endpoints: latency-svc-xtcjd [1.03656138s] May 13 22:21:22.758: INFO: Created: latency-svc-pxgpf May 13 22:21:22.793: INFO: Got endpoints: latency-svc-pxgpf [1.06084765s] May 13 22:21:22.853: INFO: Created: latency-svc-qxfql May 13 22:21:22.872: INFO: Got endpoints: latency-svc-qxfql [1.083529733s] May 13 22:21:22.931: INFO: Created: latency-svc-4pdvf May 13 22:21:23.003: INFO: Got endpoints: latency-svc-4pdvf [1.155680736s] May 13 22:21:23.095: INFO: Created: latency-svc-gmnsg May 13 22:21:23.165: INFO: Got endpoints: latency-svc-gmnsg [1.209649127s] May 13 22:21:23.196: INFO: Created: latency-svc-nd2sq May 13 22:21:23.214: INFO: Got endpoints: latency-svc-nd2sq [1.205796183s] May 13 22:21:23.306: INFO: Created: latency-svc-jvs2q May 13 22:21:23.336: INFO: Got endpoints: latency-svc-jvs2q [1.231027151s] May 13 22:21:23.382: INFO: Created: latency-svc-nk4mz May 13 22:21:23.428: INFO: Got endpoints: latency-svc-nk4mz [1.268368005s] May 13 22:21:23.460: INFO: 
Created: latency-svc-jjbq5 May 13 22:21:23.478: INFO: Got endpoints: latency-svc-jjbq5 [1.212344804s] May 13 22:21:23.508: INFO: Created: latency-svc-2stm6 May 13 22:21:23.527: INFO: Got endpoints: latency-svc-2stm6 [1.198405956s] May 13 22:21:23.572: INFO: Created: latency-svc-pdmn8 May 13 22:21:23.575: INFO: Got endpoints: latency-svc-pdmn8 [1.151827214s] May 13 22:21:23.628: INFO: Created: latency-svc-m6c6x May 13 22:21:23.658: INFO: Got endpoints: latency-svc-m6c6x [1.197118128s] May 13 22:21:23.710: INFO: Created: latency-svc-khhc4 May 13 22:21:23.714: INFO: Got endpoints: latency-svc-khhc4 [1.156948572s] May 13 22:21:23.743: INFO: Created: latency-svc-xmjxr May 13 22:21:23.756: INFO: Got endpoints: latency-svc-xmjxr [1.131968032s] May 13 22:21:23.791: INFO: Created: latency-svc-snk2h May 13 22:21:23.805: INFO: Got endpoints: latency-svc-snk2h [1.101339822s] May 13 22:21:23.869: INFO: Created: latency-svc-wn7rb May 13 22:21:23.877: INFO: Got endpoints: latency-svc-wn7rb [1.144317334s] May 13 22:21:23.952: INFO: Created: latency-svc-bvpxv May 13 22:21:24.003: INFO: Got endpoints: latency-svc-bvpxv [1.209319106s] May 13 22:21:24.030: INFO: Created: latency-svc-wtkzj May 13 22:21:24.040: INFO: Got endpoints: latency-svc-wtkzj [1.167279675s] May 13 22:21:24.083: INFO: Created: latency-svc-mqphj May 13 22:21:24.153: INFO: Got endpoints: latency-svc-mqphj [1.149734174s] May 13 22:21:24.199: INFO: Created: latency-svc-z9552 May 13 22:21:24.214: INFO: Got endpoints: latency-svc-z9552 [1.049819814s] May 13 22:21:24.246: INFO: Created: latency-svc-b8wcj May 13 22:21:24.284: INFO: Got endpoints: latency-svc-b8wcj [1.069874373s] May 13 22:21:24.306: INFO: Created: latency-svc-sr74x May 13 22:21:24.318: INFO: Got endpoints: latency-svc-sr74x [981.658304ms] May 13 22:21:24.347: INFO: Created: latency-svc-bp2ct May 13 22:21:24.360: INFO: Got endpoints: latency-svc-bp2ct [931.471431ms] May 13 22:21:24.458: INFO: Created: latency-svc-k8czb May 13 22:21:24.511: INFO: Got endpoints: latency-svc-k8czb [1.032400416s] May 13 22:21:24.787: INFO: Created: latency-svc-4wvhx May 13 22:21:24.876: INFO: Got endpoints: latency-svc-4wvhx [1.349356575s] May 13 22:21:24.942: INFO: Created: latency-svc-8tlqz May 13 22:21:25.020: INFO: Got endpoints: latency-svc-8tlqz [1.444882736s] May 13 22:21:25.219: INFO: Created: latency-svc-dvqn7 May 13 22:21:25.375: INFO: Got endpoints: latency-svc-dvqn7 [1.717246751s] May 13 22:21:25.375: INFO: Created: latency-svc-4nzvz May 13 22:21:25.381: INFO: Got endpoints: latency-svc-4nzvz [1.666192681s] May 13 22:21:25.422: INFO: Created: latency-svc-sg47d May 13 22:21:25.429: INFO: Got endpoints: latency-svc-sg47d [1.672770799s] May 13 22:21:25.459: INFO: Created: latency-svc-4xz2g May 13 22:21:25.465: INFO: Got endpoints: latency-svc-4xz2g [1.660497629s] May 13 22:21:25.508: INFO: Created: latency-svc-z9f78 May 13 22:21:25.514: INFO: Got endpoints: latency-svc-z9f78 [1.63654313s] May 13 22:21:25.543: INFO: Created: latency-svc-tqm4w May 13 22:21:25.573: INFO: Created: latency-svc-c22tx May 13 22:21:25.573: INFO: Got endpoints: latency-svc-tqm4w [1.570364797s] May 13 22:21:25.586: INFO: Got endpoints: latency-svc-c22tx [1.546615423s] May 13 22:21:25.674: INFO: Created: latency-svc-6pjdl May 13 22:21:25.698: INFO: Got endpoints: latency-svc-6pjdl [1.544676282s] May 13 22:21:25.727: INFO: Created: latency-svc-5xb94 May 13 22:21:25.738: INFO: Got endpoints: latency-svc-5xb94 [1.523259462s] May 13 22:21:25.836: INFO: Created: latency-svc-fx2dn May 13 22:21:25.840: INFO: Got endpoints: 
latency-svc-fx2dn [1.55522346s] May 13 22:21:25.933: INFO: Created: latency-svc-bjpct May 13 22:21:25.991: INFO: Got endpoints: latency-svc-bjpct [1.673374988s] May 13 22:21:25.993: INFO: Created: latency-svc-8jzwn May 13 22:21:26.002: INFO: Got endpoints: latency-svc-8jzwn [1.642610113s] May 13 22:21:26.022: INFO: Created: latency-svc-2wh86 May 13 22:21:26.040: INFO: Got endpoints: latency-svc-2wh86 [1.52858862s] May 13 22:21:26.077: INFO: Created: latency-svc-8lnk6 May 13 22:21:26.159: INFO: Got endpoints: latency-svc-8lnk6 [1.282782241s] May 13 22:21:26.172: INFO: Created: latency-svc-vbcl9 May 13 22:21:26.190: INFO: Got endpoints: latency-svc-vbcl9 [1.16975028s] May 13 22:21:26.215: INFO: Created: latency-svc-kf7bq May 13 22:21:26.238: INFO: Got endpoints: latency-svc-kf7bq [863.189077ms] May 13 22:21:26.315: INFO: Created: latency-svc-59j9p May 13 22:21:26.319: INFO: Got endpoints: latency-svc-59j9p [937.975633ms] May 13 22:21:26.363: INFO: Created: latency-svc-zzqh2 May 13 22:21:26.377: INFO: Got endpoints: latency-svc-zzqh2 [947.952969ms] May 13 22:21:26.407: INFO: Created: latency-svc-2xph7 May 13 22:21:26.476: INFO: Got endpoints: latency-svc-2xph7 [1.010188858s] May 13 22:21:26.508: INFO: Created: latency-svc-ml8zg May 13 22:21:26.528: INFO: Got endpoints: latency-svc-ml8zg [1.014053866s] May 13 22:21:26.614: INFO: Created: latency-svc-blvnc May 13 22:21:26.642: INFO: Got endpoints: latency-svc-blvnc [1.068780603s] May 13 22:21:26.670: INFO: Created: latency-svc-pnfjs May 13 22:21:26.690: INFO: Got endpoints: latency-svc-pnfjs [1.103652474s] May 13 22:21:26.770: INFO: Created: latency-svc-t9t66 May 13 22:21:26.809: INFO: Got endpoints: latency-svc-t9t66 [1.110840907s] May 13 22:21:26.809: INFO: Created: latency-svc-l6vrk May 13 22:21:26.865: INFO: Got endpoints: latency-svc-l6vrk [1.12714499s] May 13 22:21:26.975: INFO: Created: latency-svc-zpmfw May 13 22:21:26.985: INFO: Got endpoints: latency-svc-zpmfw [1.145799276s] May 13 22:21:27.018: INFO: Created: latency-svc-z9l8b May 13 22:21:27.075: INFO: Got endpoints: latency-svc-z9l8b [1.083671107s] May 13 22:21:27.115: INFO: Created: latency-svc-z7h2p May 13 22:21:27.230: INFO: Got endpoints: latency-svc-z7h2p [1.227986691s] May 13 22:21:27.232: INFO: Created: latency-svc-5zn7z May 13 22:21:27.251: INFO: Got endpoints: latency-svc-5zn7z [1.211276725s] May 13 22:21:27.289: INFO: Created: latency-svc-rrrq2 May 13 22:21:27.298: INFO: Got endpoints: latency-svc-rrrq2 [1.13939692s] May 13 22:21:27.317: INFO: Created: latency-svc-sxqwv May 13 22:21:27.323: INFO: Got endpoints: latency-svc-sxqwv [1.132678316s] May 13 22:21:27.362: INFO: Created: latency-svc-44vt8 May 13 22:21:27.372: INFO: Got endpoints: latency-svc-44vt8 [1.133889638s] May 13 22:21:27.415: INFO: Created: latency-svc-qr5kf May 13 22:21:27.432: INFO: Got endpoints: latency-svc-qr5kf [1.113193881s] May 13 22:21:27.555: INFO: Created: latency-svc-jf7dd May 13 22:21:27.558: INFO: Got endpoints: latency-svc-jf7dd [1.18089796s] May 13 22:21:27.600: INFO: Created: latency-svc-vf59k May 13 22:21:27.624: INFO: Got endpoints: latency-svc-vf59k [1.148320531s] May 13 22:21:27.704: INFO: Created: latency-svc-wsm9f May 13 22:21:27.714: INFO: Got endpoints: latency-svc-wsm9f [1.186534907s] May 13 22:21:27.750: INFO: Created: latency-svc-c5l9j May 13 22:21:27.763: INFO: Got endpoints: latency-svc-c5l9j [1.121025805s] May 13 22:21:27.798: INFO: Created: latency-svc-bh4ll May 13 22:21:27.835: INFO: Got endpoints: latency-svc-bh4ll [1.145176403s] May 13 22:21:27.859: INFO: Created: 
latency-svc-z2l7f May 13 22:21:27.873: INFO: Got endpoints: latency-svc-z2l7f [1.064403609s] May 13 22:21:27.901: INFO: Created: latency-svc-8rwjs May 13 22:21:27.914: INFO: Got endpoints: latency-svc-8rwjs [1.049184287s] May 13 22:21:27.995: INFO: Created: latency-svc-rp4fb May 13 22:21:28.011: INFO: Got endpoints: latency-svc-rp4fb [1.025880104s] May 13 22:21:28.044: INFO: Created: latency-svc-rtpwk May 13 22:21:28.060: INFO: Got endpoints: latency-svc-rtpwk [985.252807ms] May 13 22:21:28.087: INFO: Created: latency-svc-nhv66 May 13 22:21:28.134: INFO: Got endpoints: latency-svc-nhv66 [903.945751ms] May 13 22:21:28.147: INFO: Created: latency-svc-6cv26 May 13 22:21:28.163: INFO: Got endpoints: latency-svc-6cv26 [911.943622ms] May 13 22:21:28.199: INFO: Created: latency-svc-sd2fr May 13 22:21:28.223: INFO: Got endpoints: latency-svc-sd2fr [924.705223ms] May 13 22:21:28.305: INFO: Created: latency-svc-gkf9n May 13 22:21:28.332: INFO: Got endpoints: latency-svc-gkf9n [1.009316058s] May 13 22:21:28.362: INFO: Created: latency-svc-jx247 May 13 22:21:28.379: INFO: Got endpoints: latency-svc-jx247 [1.006944932s] May 13 22:21:28.399: INFO: Created: latency-svc-jp87r May 13 22:21:28.446: INFO: Got endpoints: latency-svc-jp87r [1.014306009s] May 13 22:21:28.463: INFO: Created: latency-svc-bfnm7 May 13 22:21:28.482: INFO: Got endpoints: latency-svc-bfnm7 [924.266219ms] May 13 22:21:28.524: INFO: Created: latency-svc-s5t8t May 13 22:21:28.578: INFO: Got endpoints: latency-svc-s5t8t [953.932713ms] May 13 22:21:28.602: INFO: Created: latency-svc-5pngv May 13 22:21:28.609: INFO: Got endpoints: latency-svc-5pngv [894.582463ms] May 13 22:21:28.668: INFO: Created: latency-svc-rz2wg May 13 22:21:28.715: INFO: Got endpoints: latency-svc-rz2wg [952.441153ms] May 13 22:21:28.721: INFO: Created: latency-svc-85qrv May 13 22:21:28.736: INFO: Got endpoints: latency-svc-85qrv [901.048498ms] May 13 22:21:28.757: INFO: Created: latency-svc-gt48d May 13 22:21:28.766: INFO: Got endpoints: latency-svc-gt48d [892.834705ms] May 13 22:21:28.795: INFO: Created: latency-svc-kxtbq May 13 22:21:28.809: INFO: Got endpoints: latency-svc-kxtbq [894.360555ms] May 13 22:21:28.877: INFO: Created: latency-svc-kcfsr May 13 22:21:28.905: INFO: Got endpoints: latency-svc-kcfsr [893.730119ms] May 13 22:21:28.964: INFO: Created: latency-svc-nbk7v May 13 22:21:29.057: INFO: Got endpoints: latency-svc-nbk7v [997.060653ms] May 13 22:21:29.059: INFO: Created: latency-svc-6jmj8 May 13 22:21:29.098: INFO: Got endpoints: latency-svc-6jmj8 [963.100429ms] May 13 22:21:29.267: INFO: Created: latency-svc-ql47v May 13 22:21:29.315: INFO: Got endpoints: latency-svc-ql47v [1.151765639s] May 13 22:21:29.352: INFO: Created: latency-svc-r2tlp May 13 22:21:29.470: INFO: Got endpoints: latency-svc-r2tlp [1.246946658s] May 13 22:21:29.484: INFO: Created: latency-svc-dxr4b May 13 22:21:29.550: INFO: Got endpoints: latency-svc-dxr4b [1.218009487s] May 13 22:21:29.629: INFO: Created: latency-svc-tstzm May 13 22:21:29.644: INFO: Got endpoints: latency-svc-tstzm [1.265102592s] May 13 22:21:29.671: INFO: Created: latency-svc-6nrjl May 13 22:21:29.687: INFO: Got endpoints: latency-svc-6nrjl [1.240407812s] May 13 22:21:29.812: INFO: Created: latency-svc-964wm May 13 22:21:29.820: INFO: Got endpoints: latency-svc-964wm [1.337701331s] May 13 22:21:29.876: INFO: Created: latency-svc-qjqvp May 13 22:21:29.892: INFO: Got endpoints: latency-svc-qjqvp [1.313504478s] May 13 22:21:29.973: INFO: Created: latency-svc-vjpf5 May 13 22:21:29.977: INFO: Got endpoints: 
latency-svc-vjpf5 [1.368103213s] May 13 22:21:30.030: INFO: Created: latency-svc-gjmwx May 13 22:21:30.072: INFO: Got endpoints: latency-svc-gjmwx [1.35682207s] May 13 22:21:30.123: INFO: Created: latency-svc-ks5vn May 13 22:21:30.132: INFO: Got endpoints: latency-svc-ks5vn [1.395907648s] May 13 22:21:30.168: INFO: Created: latency-svc-dsdrz May 13 22:21:30.199: INFO: Got endpoints: latency-svc-dsdrz [1.432570616s] May 13 22:21:30.272: INFO: Created: latency-svc-fzsq8 May 13 22:21:30.275: INFO: Got endpoints: latency-svc-fzsq8 [1.466294284s] May 13 22:21:30.312: INFO: Created: latency-svc-n7pzj May 13 22:21:30.332: INFO: Got endpoints: latency-svc-n7pzj [1.426550506s] May 13 22:21:30.354: INFO: Created: latency-svc-7sghm May 13 22:21:30.405: INFO: Got endpoints: latency-svc-7sghm [1.347725524s] May 13 22:21:30.424: INFO: Created: latency-svc-b65fn May 13 22:21:30.462: INFO: Got endpoints: latency-svc-b65fn [1.364223341s] May 13 22:21:30.560: INFO: Created: latency-svc-zntq2 May 13 22:21:30.563: INFO: Got endpoints: latency-svc-zntq2 [1.248147664s] May 13 22:21:30.594: INFO: Created: latency-svc-ntpt6 May 13 22:21:30.627: INFO: Got endpoints: latency-svc-ntpt6 [1.156739854s] May 13 22:21:30.648: INFO: Created: latency-svc-8k4nj May 13 22:21:30.658: INFO: Got endpoints: latency-svc-8k4nj [1.107507672s] May 13 22:21:30.701: INFO: Created: latency-svc-pj6cj May 13 22:21:30.718: INFO: Got endpoints: latency-svc-pj6cj [1.073952785s] May 13 22:21:30.744: INFO: Created: latency-svc-jpbcz May 13 22:21:30.760: INFO: Got endpoints: latency-svc-jpbcz [1.073126441s] May 13 22:21:30.780: INFO: Created: latency-svc-8x48t May 13 22:21:30.847: INFO: Got endpoints: latency-svc-8x48t [1.026901748s] May 13 22:21:30.857: INFO: Created: latency-svc-cmmd5 May 13 22:21:30.899: INFO: Got endpoints: latency-svc-cmmd5 [1.007331083s] May 13 22:21:30.942: INFO: Created: latency-svc-vddp9 May 13 22:21:30.991: INFO: Got endpoints: latency-svc-vddp9 [1.013930192s] May 13 22:21:31.003: INFO: Created: latency-svc-tsxgd May 13 22:21:31.021: INFO: Got endpoints: latency-svc-tsxgd [948.813555ms] May 13 22:21:31.055: INFO: Created: latency-svc-8r2tz May 13 22:21:31.074: INFO: Got endpoints: latency-svc-8r2tz [941.543068ms] May 13 22:21:31.135: INFO: Created: latency-svc-4kgkj May 13 22:21:31.138: INFO: Got endpoints: latency-svc-4kgkj [938.913211ms] May 13 22:21:31.170: INFO: Created: latency-svc-jcmjw May 13 22:21:31.189: INFO: Got endpoints: latency-svc-jcmjw [913.818384ms] May 13 22:21:31.215: INFO: Created: latency-svc-jndxp May 13 22:21:31.231: INFO: Got endpoints: latency-svc-jndxp [899.033707ms] May 13 22:21:31.289: INFO: Created: latency-svc-dqrdf May 13 22:21:31.303: INFO: Got endpoints: latency-svc-dqrdf [898.132385ms] May 13 22:21:31.325: INFO: Created: latency-svc-jlx62 May 13 22:21:31.346: INFO: Got endpoints: latency-svc-jlx62 [884.211488ms] May 13 22:21:31.431: INFO: Created: latency-svc-x5cgn May 13 22:21:31.452: INFO: Got endpoints: latency-svc-x5cgn [888.895995ms] May 13 22:21:31.475: INFO: Created: latency-svc-4r4lw May 13 22:21:31.515: INFO: Got endpoints: latency-svc-4r4lw [887.674835ms] May 13 22:21:31.578: INFO: Created: latency-svc-bqvkp May 13 22:21:31.581: INFO: Got endpoints: latency-svc-bqvkp [923.55544ms] May 13 22:21:31.614: INFO: Created: latency-svc-ht8lk May 13 22:21:31.629: INFO: Got endpoints: latency-svc-ht8lk [911.028742ms] May 13 22:21:31.655: INFO: Created: latency-svc-srtnz May 13 22:21:31.672: INFO: Got endpoints: latency-svc-srtnz [912.490649ms] May 13 22:21:31.734: INFO: Created: 
latency-svc-8bnvm May 13 22:21:31.736: INFO: Got endpoints: latency-svc-8bnvm [889.035379ms] May 13 22:21:31.771: INFO: Created: latency-svc-s584l May 13 22:21:31.786: INFO: Got endpoints: latency-svc-s584l [887.240512ms] May 13 22:21:31.818: INFO: Created: latency-svc-5qm7r May 13 22:21:31.871: INFO: Got endpoints: latency-svc-5qm7r [879.828793ms] May 13 22:21:31.890: INFO: Created: latency-svc-bdkzd May 13 22:21:31.907: INFO: Got endpoints: latency-svc-bdkzd [886.183253ms] May 13 22:21:31.931: INFO: Created: latency-svc-qbx8k May 13 22:21:31.951: INFO: Got endpoints: latency-svc-qbx8k [877.175918ms] May 13 22:21:32.063: INFO: Created: latency-svc-mm9nn May 13 22:21:32.066: INFO: Got endpoints: latency-svc-mm9nn [928.188509ms] May 13 22:21:32.123: INFO: Created: latency-svc-gw8zm May 13 22:21:32.142: INFO: Got endpoints: latency-svc-gw8zm [953.46565ms] May 13 22:21:32.206: INFO: Created: latency-svc-hdv76 May 13 22:21:32.239: INFO: Got endpoints: latency-svc-hdv76 [1.008196211s] May 13 22:21:32.273: INFO: Created: latency-svc-xt8gs May 13 22:21:32.299: INFO: Got endpoints: latency-svc-xt8gs [995.752635ms] May 13 22:21:32.420: INFO: Created: latency-svc-wln8r May 13 22:21:32.425: INFO: Got endpoints: latency-svc-wln8r [1.079088467s] May 13 22:21:32.485: INFO: Created: latency-svc-t88w9 May 13 22:21:32.632: INFO: Got endpoints: latency-svc-t88w9 [1.1798391s] May 13 22:21:32.658: INFO: Created: latency-svc-tmt5t May 13 22:21:32.684: INFO: Got endpoints: latency-svc-tmt5t [1.169041731s] May 13 22:21:32.823: INFO: Created: latency-svc-r85z7 May 13 22:21:32.841: INFO: Got endpoints: latency-svc-r85z7 [1.25973561s] May 13 22:21:32.886: INFO: Created: latency-svc-p8lrw May 13 22:21:32.900: INFO: Got endpoints: latency-svc-p8lrw [1.270976967s] May 13 22:21:33.029: INFO: Created: latency-svc-gc79n May 13 22:21:33.051: INFO: Got endpoints: latency-svc-gc79n [1.37868582s] May 13 22:21:33.103: INFO: Created: latency-svc-n4xlb May 13 22:21:33.219: INFO: Got endpoints: latency-svc-n4xlb [1.482474968s] May 13 22:21:33.221: INFO: Created: latency-svc-pxjdv May 13 22:21:33.238: INFO: Got endpoints: latency-svc-pxjdv [1.451657519s] May 13 22:21:33.283: INFO: Created: latency-svc-672b6 May 13 22:21:33.309: INFO: Got endpoints: latency-svc-672b6 [1.438621089s] May 13 22:21:33.447: INFO: Created: latency-svc-t5k72 May 13 22:21:33.499: INFO: Got endpoints: latency-svc-t5k72 [1.591733159s] May 13 22:21:33.649: INFO: Created: latency-svc-x88z9 May 13 22:21:33.650: INFO: Got endpoints: latency-svc-x88z9 [1.698738012s] May 13 22:21:33.708: INFO: Created: latency-svc-c4hwp May 13 22:21:33.781: INFO: Got endpoints: latency-svc-c4hwp [1.715338629s] May 13 22:21:33.793: INFO: Created: latency-svc-xh452 May 13 22:21:33.809: INFO: Got endpoints: latency-svc-xh452 [1.666795334s] May 13 22:21:33.849: INFO: Created: latency-svc-8jttq May 13 22:21:33.870: INFO: Got endpoints: latency-svc-8jttq [1.631018563s] May 13 22:21:33.944: INFO: Created: latency-svc-jwhph May 13 22:21:33.967: INFO: Got endpoints: latency-svc-jwhph [1.667776652s] May 13 22:21:33.969: INFO: Created: latency-svc-5fk6r May 13 22:21:33.984: INFO: Got endpoints: latency-svc-5fk6r [1.558940622s] May 13 22:21:34.021: INFO: Created: latency-svc-2z4m2 May 13 22:21:34.040: INFO: Got endpoints: latency-svc-2z4m2 [1.407863836s] May 13 22:21:34.087: INFO: Created: latency-svc-6ghs9 May 13 22:21:34.106: INFO: Got endpoints: latency-svc-6ghs9 [1.421757012s] May 13 22:21:34.146: INFO: Created: latency-svc-zx6qq May 13 22:21:34.267: INFO: Got endpoints: 
latency-svc-zx6qq [1.425495249s] May 13 22:21:34.268: INFO: Created: latency-svc-nk2cs May 13 22:21:34.286: INFO: Got endpoints: latency-svc-nk2cs [1.385107223s] May 13 22:21:34.333: INFO: Created: latency-svc-n6kpc May 13 22:21:34.346: INFO: Got endpoints: latency-svc-n6kpc [1.29506391s] May 13 22:21:34.411: INFO: Created: latency-svc-sbtk9 May 13 22:21:34.415: INFO: Got endpoints: latency-svc-sbtk9 [1.196484199s] May 13 22:21:34.447: INFO: Created: latency-svc-lxgct May 13 22:21:34.461: INFO: Got endpoints: latency-svc-lxgct [1.223314349s] May 13 22:21:34.489: INFO: Created: latency-svc-bqvjl May 13 22:21:34.503: INFO: Got endpoints: latency-svc-bqvjl [1.193657044s] May 13 22:21:34.578: INFO: Created: latency-svc-tjgvz May 13 22:21:34.590: INFO: Got endpoints: latency-svc-tjgvz [1.090773243s] May 13 22:21:34.627: INFO: Created: latency-svc-zfj2g May 13 22:21:34.656: INFO: Got endpoints: latency-svc-zfj2g [1.005922036s] May 13 22:21:34.722: INFO: Created: latency-svc-rznzz May 13 22:21:34.725: INFO: Got endpoints: latency-svc-rznzz [943.333607ms] May 13 22:21:34.753: INFO: Created: latency-svc-bkmg2 May 13 22:21:34.763: INFO: Got endpoints: latency-svc-bkmg2 [953.724606ms] May 13 22:21:34.790: INFO: Created: latency-svc-6tkcv May 13 22:21:34.801: INFO: Got endpoints: latency-svc-6tkcv [931.032464ms] May 13 22:21:34.890: INFO: Created: latency-svc-kg5kn May 13 22:21:34.926: INFO: Got endpoints: latency-svc-kg5kn [958.920436ms] May 13 22:21:35.003: INFO: Created: latency-svc-9hvxl May 13 22:21:35.007: INFO: Got endpoints: latency-svc-9hvxl [1.022680097s] May 13 22:21:35.041: INFO: Created: latency-svc-nsm8g May 13 22:21:35.052: INFO: Got endpoints: latency-svc-nsm8g [1.012653772s] May 13 22:21:35.141: INFO: Created: latency-svc-qq94g May 13 22:21:35.161: INFO: Got endpoints: latency-svc-qq94g [1.054959314s] May 13 22:21:35.203: INFO: Created: latency-svc-vt9fl May 13 22:21:35.227: INFO: Got endpoints: latency-svc-vt9fl [960.571331ms] May 13 22:21:35.298: INFO: Created: latency-svc-c9gh4 May 13 22:21:35.311: INFO: Got endpoints: latency-svc-c9gh4 [1.025750975s] May 13 22:21:35.346: INFO: Created: latency-svc-prmw6 May 13 22:21:35.367: INFO: Got endpoints: latency-svc-prmw6 [1.020379519s] May 13 22:21:35.420: INFO: Created: latency-svc-pwphq May 13 22:21:35.451: INFO: Got endpoints: latency-svc-pwphq [1.035303193s] May 13 22:21:35.497: INFO: Created: latency-svc-j7xpl May 13 22:21:35.548: INFO: Got endpoints: latency-svc-j7xpl [1.086526218s] May 13 22:21:35.610: INFO: Created: latency-svc-7rdfk May 13 22:21:35.637: INFO: Got endpoints: latency-svc-7rdfk [1.134227782s] May 13 22:21:35.737: INFO: Created: latency-svc-jx6dc May 13 22:21:35.786: INFO: Got endpoints: latency-svc-jx6dc [1.196038045s] May 13 22:21:35.861: INFO: Created: latency-svc-6g5qj May 13 22:21:35.877: INFO: Got endpoints: latency-svc-6g5qj [1.221179287s] May 13 22:21:35.911: INFO: Created: latency-svc-bnqlh May 13 22:21:35.926: INFO: Got endpoints: latency-svc-bnqlh [1.200835355s] May 13 22:21:36.003: INFO: Created: latency-svc-brlw2 May 13 22:21:36.006: INFO: Got endpoints: latency-svc-brlw2 [1.243537279s] May 13 22:21:36.042: INFO: Created: latency-svc-7wkf8 May 13 22:21:36.058: INFO: Got endpoints: latency-svc-7wkf8 [1.257249423s] May 13 22:21:36.090: INFO: Created: latency-svc-nfscd May 13 22:21:36.135: INFO: Got endpoints: latency-svc-nfscd [1.209139609s] May 13 22:21:36.161: INFO: Created: latency-svc-fx2lr May 13 22:21:36.179: INFO: Got endpoints: latency-svc-fx2lr [1.172220806s] May 13 22:21:36.204: INFO: Created: 
latency-svc-56t7g May 13 22:21:36.222: INFO: Got endpoints: latency-svc-56t7g [1.169560561s] May 13 22:21:36.305: INFO: Created: latency-svc-nrt5s May 13 22:21:36.342: INFO: Got endpoints: latency-svc-nrt5s [1.180824378s] May 13 22:21:36.384: INFO: Created: latency-svc-s6cnh May 13 22:21:36.422: INFO: Got endpoints: latency-svc-s6cnh [1.194867563s] May 13 22:21:36.468: INFO: Created: latency-svc-59kvq May 13 22:21:36.480: INFO: Got endpoints: latency-svc-59kvq [1.16899081s] May 13 22:21:36.480: INFO: Latencies: [53.991214ms 128.88775ms 180.011319ms 210.676282ms 303.171487ms 355.416221ms 436.171167ms 456.825448ms 493.840824ms 530.334697ms 587.081762ms 645.234156ms 752.689032ms 806.708111ms 863.189077ms 877.175918ms 879.828793ms 884.211488ms 886.183253ms 887.240512ms 887.674835ms 888.895995ms 889.035379ms 892.834705ms 893.730119ms 894.360555ms 894.582463ms 898.132385ms 899.033707ms 901.048498ms 902.639893ms 903.876552ms 903.945751ms 911.028742ms 911.943622ms 912.490649ms 913.818384ms 923.55544ms 924.266219ms 924.705223ms 928.188509ms 931.032464ms 931.471431ms 934.943322ms 937.975633ms 938.913211ms 941.543068ms 943.333607ms 946.055669ms 947.952969ms 948.813555ms 952.441153ms 953.46565ms 953.724606ms 953.932713ms 955.46791ms 958.920436ms 960.571331ms 963.100429ms 981.658304ms 985.252807ms 986.0719ms 995.752635ms 997.060653ms 999.845676ms 1.005922036s 1.006944932s 1.007331083s 1.008196211s 1.009316058s 1.010188858s 1.010886887s 1.012653772s 1.013930192s 1.014053866s 1.014306009s 1.020379519s 1.022680097s 1.025750975s 1.025880104s 1.026901748s 1.032400416s 1.035303193s 1.03656138s 1.044472225s 1.049184287s 1.049819814s 1.054959314s 1.06084765s 1.064403609s 1.068780603s 1.069874373s 1.073126441s 1.073952785s 1.079088467s 1.083529733s 1.083671107s 1.086526218s 1.090773243s 1.101339822s 1.103652474s 1.107507672s 1.110840907s 1.113193881s 1.121025805s 1.12714499s 1.131968032s 1.132678316s 1.133889638s 1.134227782s 1.13939692s 1.144317334s 1.145176403s 1.145799276s 1.148320531s 1.149734174s 1.151765639s 1.151827214s 1.155680736s 1.156739854s 1.156948572s 1.167279675s 1.16899081s 1.169041731s 1.169560561s 1.16975028s 1.172220806s 1.1798391s 1.180824378s 1.18089796s 1.186534907s 1.193657044s 1.194867563s 1.196038045s 1.196484199s 1.197118128s 1.198405956s 1.200835355s 1.205796183s 1.209139609s 1.209319106s 1.209649127s 1.211276725s 1.212344804s 1.218009487s 1.221179287s 1.223314349s 1.227986691s 1.231027151s 1.240407812s 1.243537279s 1.246946658s 1.248147664s 1.257249423s 1.25973561s 1.265102592s 1.268368005s 1.270976967s 1.282782241s 1.29506391s 1.313504478s 1.337701331s 1.347725524s 1.349356575s 1.35682207s 1.364223341s 1.368103213s 1.37868582s 1.385107223s 1.395907648s 1.407863836s 1.421757012s 1.425495249s 1.426550506s 1.432570616s 1.438621089s 1.444882736s 1.451657519s 1.466294284s 1.482474968s 1.523259462s 1.52858862s 1.544676282s 1.546615423s 1.55522346s 1.558940622s 1.570364797s 1.591733159s 1.631018563s 1.63654313s 1.642610113s 1.660497629s 1.666192681s 1.666795334s 1.667776652s 1.672770799s 1.673374988s 1.698738012s 1.715338629s 1.717246751s] May 13 22:21:36.481: INFO: 50 %ile: 1.103652474s May 13 22:21:36.481: INFO: 90 %ile: 1.523259462s May 13 22:21:36.481: INFO: 99 %ile: 1.715338629s May 13 22:21:36.481: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:21:36.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svc-latency-3107" for this suite. • [SLOW TEST:19.580 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":217,"skipped":3575,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:21:36.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5186.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5186.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.172.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.172.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.172.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.172.27_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5186.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5186.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.172.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.172.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.172.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.172.27_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:21:44.787: INFO: Unable to read wheezy_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:44.793: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:44.817: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:44.859: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:44.955: INFO: Unable to read jessie_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:44.985: INFO: Unable to read jessie_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:44.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:45.021: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:45.062: INFO: Lookups using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb failed for: [wheezy_udp@dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_udp@dns-test-service.dns-5186.svc.cluster.local jessie_tcp@dns-test-service.dns-5186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local] May 13 22:21:50.105: INFO: Unable to read wheezy_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.109: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods 
dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.118: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.121: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.304: INFO: Unable to read jessie_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.394: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:50.564: INFO: Lookups using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb failed for: [wheezy_udp@dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_udp@dns-test-service.dns-5186.svc.cluster.local jessie_tcp@dns-test-service.dns-5186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local] May 13 22:21:55.078: INFO: Unable to read wheezy_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.098: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.127: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.269: INFO: Unable to read jessie_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the 
server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.375: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.378: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:21:55.645: INFO: Lookups using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb failed for: [wheezy_udp@dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_udp@dns-test-service.dns-5186.svc.cluster.local jessie_tcp@dns-test-service.dns-5186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local] May 13 22:22:00.119: INFO: Unable to read wheezy_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.444: INFO: Unable to read jessie_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.485: INFO: Unable to read jessie_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod 
dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:00.768: INFO: Lookups using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb failed for: [wheezy_udp@dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_udp@dns-test-service.dns-5186.svc.cluster.local jessie_tcp@dns-test-service.dns-5186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local] May 13 22:22:05.106: INFO: Unable to read wheezy_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.108: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.154: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.381: INFO: Unable to read jessie_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.384: INFO: Unable to read jessie_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.387: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.390: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:05.407: INFO: Lookups using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb failed for: [wheezy_udp@dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_udp@dns-test-service.dns-5186.svc.cluster.local jessie_tcp@dns-test-service.dns-5186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local] May 13 
22:22:10.067: INFO: Unable to read wheezy_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.070: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.073: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.095: INFO: Unable to read jessie_udp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.097: INFO: Unable to read jessie_tcp@dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.100: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.102: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local from pod dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb: the server could not find the requested resource (get pods dns-test-a500f74c-e101-4760-b491-509a98c25afb) May 13 22:22:10.118: INFO: Lookups using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb failed for: [wheezy_udp@dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@dns-test-service.dns-5186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_udp@dns-test-service.dns-5186.svc.cluster.local jessie_tcp@dns-test-service.dns-5186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5186.svc.cluster.local] May 13 22:22:15.140: INFO: DNS probes using dns-5186/dns-test-a500f74c-e101-4760-b491-509a98c25afb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:22:15.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5186" for this suite. 
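The wheezy and jessie probe pods above loop over dig queries for the service's A and SRV records (plus the pod's own A record and a PTR lookup) and write an OK marker file per name; the recurring "Unable to read ... the server could not find the requested resource" messages are expected until the corresponding marker files exist, and the run succeeds once every lookup has resolved. A rough manual equivalent, sketched with an illustrative pod name and the busybox:1.28 image (an assumption; any image whose nslookup honors the cluster search path works, and the dns-5186 namespace no longer exists after teardown):

kubectl -n dns-5186 run dns-check --image=busybox:1.28 --restart=Never --command -- sleep 3600
kubectl -n dns-5186 exec dns-check -- nslookup dns-test-service.dns-5186.svc.cluster.local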
• [SLOW TEST:38.918 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":218,"skipped":3576,"failed":0} [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:22:15.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 22:22:15.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3882' May 13 22:22:15.920: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 13 22:22:15.920: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 13 22:22:15.969: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 13 22:22:16.146: INFO: scanned /root for discovery docs: May 13 22:22:16.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3882' May 13 22:22:32.106: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 13 22:22:32.106: INFO: stdout: "Created e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c\nScaling up e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 13 22:22:32.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3882' May 13 22:22:32.198: INFO: stderr: "" May 13 22:22:32.198: INFO: stdout: "e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c-954mf " May 13 22:22:32.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c-954mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3882' May 13 22:22:32.291: INFO: stderr: "" May 13 22:22:32.291: INFO: stdout: "true" May 13 22:22:32.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c-954mf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3882' May 13 22:22:32.383: INFO: stderr: "" May 13 22:22:32.383: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 13 22:22:32.383: INFO: e2e-test-httpd-rc-3706f98e29bb3443d363fc9e95437d0c-954mf is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 13 22:22:32.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3882' May 13 22:22:32.512: INFO: stderr: "" May 13 22:22:32.512: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:22:32.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3882" for this suite. 
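As the stderr above notes, rolling-update is deprecated in favor of rollout, and it only ever worked with ReplicationControllers. With a Deployment, an update to the same image leaves the pod template unchanged, so the equivalent of this test is an explicit restart. A hedged sketch with illustrative resource names:

kubectl create deployment e2e-httpd --image=docker.io/library/httpd:2.4.38-alpine
# same-image "update": the template does not change, so request a fresh rollout directly
kubectl rollout restart deployment/e2e-httpd
kubectl rollout status deployment/e2e-httpd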
• [SLOW TEST:17.108 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":219,"skipped":3576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:22:32.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:22:39.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9942" for this suite. STEP: Destroying namespace "nsdeletetest-9847" for this suite. May 13 22:22:39.692: INFO: Namespace nsdeletetest-9847 was already deleted STEP: Destroying namespace "nsdeletetest-4066" for this suite. 
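The namespace test above hinges on cascading deletion: removing a namespace garbage-collects the Service inside it, and recreating a namespace of the same name yields an empty one. A quick manual spot-check, with illustrative names:

kubectl create namespace ns-demo
kubectl -n ns-demo create service clusterip test-svc --tcp=80:80
kubectl delete namespace ns-demo --wait=true
kubectl create namespace ns-demo
kubectl -n ns-demo get services   # expect: No resources found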
• [SLOW TEST:7.177 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":220,"skipped":3601,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:22:39.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 13 22:22:48.010: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:22:48.036: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:22:50.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:22:50.041: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:22:52.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:22:52.040: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:22:54.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:22:54.041: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:22:56.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:22:56.040: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:22:58.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:22:58.040: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:23:00.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:23:00.041: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:23:00.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-940" for this suite. 
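In the poststart http test, the pod's container declares a postStart httpGet hook pointed at the handler pod created in BeforeEach; the kubelet fires the hook right after the container starts and kills the container if the hook fails, so the "check poststart hook" step passes only once the handler has received the GET. A minimal sketch of such a spec (image, command, target IP, and port are assumptions, not the test's exact values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: busybox:1.28            # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.10        # assumed IP of the hook-handler pod
          port: 8080
          path: /echo?msg=poststart
EOF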
• [SLOW TEST:20.351 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3604,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:23:00.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7868 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7868 May 13 22:23:00.119: INFO: Found 0 stateful pods, waiting for 1 May 13 22:23:10.123: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 13 22:23:10.139: INFO: Deleting all statefulset in ns statefulset-7868 May 13 22:23:10.146: INFO: Scaling statefulset ss to 0 May 13 22:23:30.210: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:23:30.214: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:23:30.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7868" for this suite. 
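The scale subresource exercised above is a regular API endpoint distinct from the StatefulSet object itself, which is why updating it changes Spec.Replicas without touching the rest of the spec. A sketch of driving it from the CLI, reusing this run's names (the namespace is destroyed during teardown, so these are illustrative):

# read the Scale object for the statefulset
kubectl get --raw /apis/apps/v1/namespaces/statefulset-7868/statefulsets/ss/scale
# kubectl scale writes through the same subresource
kubectl -n statefulset-7868 scale statefulset ss --replicas=2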
• [SLOW TEST:30.181 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":222,"skipped":3609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:23:30.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478 May 13 22:23:30.322: INFO: Pod name my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478: Found 0 pods out of 1 May 13 22:23:35.325: INFO: Pod name my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478: Found 1 pods out of 1 May 13 22:23:35.325: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478" are running May 13 22:23:35.335: INFO: Pod "my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478-srnhx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:23:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:23:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:23:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:23:30 +0000 UTC Reason: Message:}]) May 13 22:23:35.335: INFO: Trying to dial the pod May 13 22:23:40.346: INFO: Controller my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478: Got expected result from replica 1 [my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478-srnhx]: "my-hostname-basic-ee7afaa7-51df-4aaa-b52a-626bd8721478-srnhx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:23:40.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-821" for this suite. 
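The ReplicationController test runs a single replica of an image that replies to HTTP with its own pod name, then dials the replica and checks the answer matches, which is what "Got expected result from replica 1" records. A hedged sketch of an equivalent manifest (the image and tag are assumptions; the suite ships its own test images):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumption
        args: ["serve-hostname"]                          # serves the pod name over HTTP
        ports:
        - containerPort: 9376
EOF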
• [SLOW TEST:10.121 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":223,"skipped":3636,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:23:40.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:23:40.499: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 12.948968ms)
May 13 22:23:40.504: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.634393ms)
May 13 22:23:40.508: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.886675ms)
May 13 22:23:40.511: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.270567ms)
May 13 22:23:40.514: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.22457ms)
May 13 22:23:40.518: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.714094ms)
May 13 22:23:40.522: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.507038ms)
May 13 22:23:40.545: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 22.93843ms)
May 13 22:23:40.548: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.175173ms)
May 13 22:23:40.552: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.255711ms)
May 13 22:23:40.556: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.565628ms)
May 13 22:23:40.559: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.997061ms)
May 13 22:23:40.562: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.093057ms)
May 13 22:23:40.565: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.233239ms)
May 13 22:23:40.568: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.075372ms)
May 13 22:23:40.571: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.938122ms)
May 13 22:23:40.575: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.319848ms)
May 13 22:23:40.578: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.960344ms)
May 13 22:23:40.580: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.701933ms)
May 13 22:23:40.584: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.113801ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:23:40.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3350" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":224,"skipped":3637,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:23:40.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-0b85efa8-d474-459b-ae81-a1a790d11b4d STEP: Creating secret with name s-test-opt-upd-8f5e2fbd-9b70-41ce-ade9-46fe16353b1a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0b85efa8-d474-459b-ae81-a1a790d11b4d STEP: Updating secret s-test-opt-upd-8f5e2fbd-9b70-41ce-ade9-46fe16353b1a STEP: Creating secret with name s-test-opt-create-bd431f5a-a403-4100-8599-42f2b9fdac94 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:25:19.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6162" for this suite. 
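The 98-second runtime of the projected-secret test largely reflects how secret volumes propagate: the kubelet refreshes them on its periodic sync, so the pod observes the delete, update, and create only after the next resync. A minimal sketch of a projected volume over an optional secret (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-demo
          optional: true   # the pod stays healthy even if this secret is deleted
EOF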
• [SLOW TEST:98.625 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3651,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:25:19.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 13 22:25:19.308: INFO: Waiting up to 5m0s for pod "pod-e1525d0c-3eca-41eb-999e-f22c2571e276" in namespace "emptydir-8303" to be "success or failure" May 13 22:25:19.314: INFO: Pod "pod-e1525d0c-3eca-41eb-999e-f22c2571e276": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43975ms May 13 22:25:21.320: INFO: Pod "pod-e1525d0c-3eca-41eb-999e-f22c2571e276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012057134s May 13 22:25:23.324: INFO: Pod "pod-e1525d0c-3eca-41eb-999e-f22c2571e276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016423126s STEP: Saw pod success May 13 22:25:23.324: INFO: Pod "pod-e1525d0c-3eca-41eb-999e-f22c2571e276" satisfied condition "success or failure" May 13 22:25:23.327: INFO: Trying to get logs from node jerma-worker2 pod pod-e1525d0c-3eca-41eb-999e-f22c2571e276 container test-container: STEP: delete the pod May 13 22:25:23.416: INFO: Waiting for pod pod-e1525d0c-3eca-41eb-999e-f22c2571e276 to disappear May 13 22:25:23.422: INFO: Pod pod-e1525d0c-3eca-41eb-999e-f22c2571e276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:25:23.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8303" for this suite. 
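Each emptyDir variant above launches a short-lived pod that mounts the volume and verifies ownership, mode, and writability, with medium Memory selecting a tmpfs mount. A sketch of the non-root tmpfs case (image and probe command are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001                # the non-root part of the variant
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
  restartPolicy: Never
EOF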
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3654,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:25:23.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:25:23.505: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7979468a-7cc5-4e48-b5cf-e9db89a9fdd8" in namespace "security-context-test-6861" to be "success or failure" May 13 22:25:23.523: INFO: Pod "busybox-readonly-false-7979468a-7cc5-4e48-b5cf-e9db89a9fdd8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.285152ms May 13 22:25:25.611: INFO: Pod "busybox-readonly-false-7979468a-7cc5-4e48-b5cf-e9db89a9fdd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106101826s May 13 22:25:27.650: INFO: Pod "busybox-readonly-false-7979468a-7cc5-4e48-b5cf-e9db89a9fdd8": Phase="Running", Reason="", readiness=true. Elapsed: 4.145343942s May 13 22:25:29.655: INFO: Pod "busybox-readonly-false-7979468a-7cc5-4e48-b5cf-e9db89a9fdd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150256533s May 13 22:25:29.655: INFO: Pod "busybox-readonly-false-7979468a-7cc5-4e48-b5cf-e9db89a9fdd8" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:25:29.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6861" for this suite. 
• [SLOW TEST:6.236 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3671,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:25:29.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 13 22:25:37.871: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:37.877: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:25:39.877: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:39.880: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:25:41.877: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:41.882: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:25:43.877: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:43.899: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:25:45.878: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:45.881: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:25:47.877: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:47.882: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:25:49.877: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:25:49.880: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:25:49.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1205" for this suite. 
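The hook pod above pairs a postStart exec action with the separate pod-handle-http-request handler created in BeforeEach; its shape is roughly the following (names and hook command are illustrative — the real hook calls the handler pod's HTTP endpoint rather than writing a file):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook-demo   # illustrative
spec:
  containers:
  - name: main
    image: busybox                          # illustrative
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs right after the container starts; any command can serve as the hook
          command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]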
• [SLOW TEST:20.221 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:25:49.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 13 22:25:49.985: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:25:50.003: INFO: Waiting for terminating namespaces to be deleted... May 13 22:25:50.005: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 13 22:25:50.009: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:25:50.009: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:25:50.009: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:25:50.009: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:25:50.009: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 13 22:25:50.014: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:25:50.014: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:25:50.014: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 13 22:25:50.014: INFO: Container kube-bench ready: false, restart count 0 May 13 22:25:50.014: INFO: pod-handle-http-request from container-lifecycle-hook-1205 started at 2020-05-13 22:25:29 +0000 UTC (1 container statuses recorded) May 13 22:25:50.014: INFO: Container pod-handle-http-request ready: true, restart count 0 May 13 22:25:50.014: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:25:50.014: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:25:50.014: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 13 22:25:50.014: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 13 22:25:50.124: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node jerma-worker2 May 13 22:25:50.124: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 13 22:25:50.124: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 13 22:25:50.124: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 13 22:25:50.124: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 13 22:25:50.124: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 13 22:25:50.129: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5.160eb6d654f13057], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2670/filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5.160eb6d6a81f8372], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5.160eb6d73be72494], Reason = [Created], Message = [Created container filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5] STEP: Considering event: Type = [Normal], Name = [filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5.160eb6d75a217ff1], Reason = [Started], Message = [Started container filler-pod-a6a7f5ba-0db3-44a4-bdfe-ef08593692c5] STEP: Considering event: Type = [Normal], Name = [filler-pod-b434efa7-1147-439d-a5b5-5c991a738009.160eb6d6541dde44], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2670/filler-pod-b434efa7-1147-439d-a5b5-5c991a738009 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-b434efa7-1147-439d-a5b5-5c991a738009.160eb6d6e2096f8a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b434efa7-1147-439d-a5b5-5c991a738009.160eb6d7692a73fe], Reason = [Created], Message = [Created container filler-pod-b434efa7-1147-439d-a5b5-5c991a738009] STEP: Considering event: Type = [Normal], Name = [filler-pod-b434efa7-1147-439d-a5b5-5c991a738009.160eb6d77e4a692f], Reason = [Started], Message = [Started container filler-pod-b434efa7-1147-439d-a5b5-5c991a738009] STEP: Considering event: Type = [Warning], Name = [additional-pod.160eb6d7bcc1fcca], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:25:57.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2670" for this suite. 
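Each filler pod above is just a pause container whose CPU request is sized to the node's remaining allocatable CPU, pinned to one node via the temporary `node` label, so the follow-up pod cannot fit anywhere schedulable — hence the "2 Insufficient cpu" event. A sketch with this run's numbers (name illustrative; the plain nodeSelector is an assumption — the suite may pin via node affinity instead):

apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-demo            # illustrative; the suite generates UUID names
spec:
  nodeSelector:
    node: jerma-worker             # the temporary label the test applies and later removes
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 11130m                # node allocatable CPU minus what is already requested
      limits:
        cpu: 11130m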
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.421 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":229,"skipped":3752,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:25:57.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-70fed144-8949-486e-85bb-13da1afe7eef STEP: Creating a pod to test consume secrets May 13 22:25:57.390: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f" in namespace "projected-558" to be "success or failure" May 13 22:25:57.394: INFO: Pod "pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082001ms May 13 22:25:59.399: INFO: Pod "pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008678894s May 13 22:26:01.403: INFO: Pod "pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012905942s STEP: Saw pod success May 13 22:26:01.403: INFO: Pod "pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f" satisfied condition "success or failure" May 13 22:26:01.407: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f container secret-volume-test: STEP: delete the pod May 13 22:26:01.512: INFO: Waiting for pod pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f to disappear May 13 22:26:01.561: INFO: Pod pod-projected-secrets-32894b8c-a226-4c46-a11d-c3a6ebb60e5f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:26:01.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-558" for this suite. 
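"Multiple volumes" in this spec means one secret projected through two separate volumes of the same pod; schematically (secret name, key, and mount paths illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-two-volumes-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                                # illustrative
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-demo        # the same secret, consumed twice
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-demo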
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3753,"failed":0} SS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:26:01.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:26:19.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3225" for this suite. • [SLOW TEST:18.161 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":231,"skipped":3755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:26:19.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-z7h6j in namespace proxy-1243 I0513 22:26:19.953456 6 runners.go:189] Created replication controller with name: proxy-service-z7h6j, namespace: proxy-1243, replica count: 1 I0513 22:26:21.003750 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:26:22.003982 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:26:23.004190 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:26:24.004386 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:25.004585 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:26.004808 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:27.005050 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:28.005282 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:29.005514 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:30.005737 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:31.005985 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0513 22:26:32.006183 6 runners.go:189] proxy-service-z7h6j Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:26:32.010: INFO: setup took 12.151080124s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 13 22:26:32.015: INFO: (0) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.768948ms) May 13 22:26:32.018: INFO: (0) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 7.257497ms) May 13 22:26:32.018: INFO: (0) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 7.637183ms) May 13 22:26:32.018: INFO: (0) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 7.89092ms) May 13 22:26:32.018: INFO: (0) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 8.089485ms) May 13 22:26:32.018: INFO: (0) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 8.020078ms) May 13 22:26:32.022: INFO: (0) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 11.991501ms) May 13 22:26:32.022: INFO: (0) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 11.872639ms) May 13 22:26:32.023: INFO: (0) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 12.186193ms) May 13 22:26:32.023: INFO: (0) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 12.387176ms) May 13 22:26:32.023: INFO: (0) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 12.886318ms) May 13 22:26:32.027: INFO: (0) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 17.0447ms) May 13 22:26:32.027: INFO: (0) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test<... 
(200; 7.122818ms) May 13 22:26:32.053: INFO: (1) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 7.015795ms) May 13 22:26:32.053: INFO: (1) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 7.064199ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 7.338279ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 7.324389ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 7.393361ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 7.315606ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 7.345913ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 7.406104ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 7.494608ms) May 13 22:26:32.054: INFO: (1) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 7.340519ms) May 13 22:26:32.056: INFO: (2) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 1.905575ms) May 13 22:26:32.059: INFO: (2) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.850205ms) May 13 22:26:32.059: INFO: (2) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.958095ms) May 13 22:26:32.059: INFO: (2) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 5.279434ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.546697ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.569596ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 5.480784ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 5.710422ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 5.691964ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test<... 
(200; 5.888986ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 6.000253ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 6.307699ms) May 13 22:26:32.060: INFO: (2) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 6.615805ms) May 13 22:26:32.064: INFO: (3) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 2.837935ms) May 13 22:26:32.065: INFO: (3) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.320308ms) May 13 22:26:32.065: INFO: (3) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.816796ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 5.049122ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 4.886196ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 5.121194ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 4.919075ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.226782ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 5.207742ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.405243ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 5.346943ms) May 13 22:26:32.066: INFO: (3) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 4.215261ms) May 13 22:26:32.071: INFO: (4) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.325029ms) May 13 22:26:32.071: INFO: (4) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.356367ms) May 13 22:26:32.071: INFO: (4) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.332295ms) May 13 22:26:32.071: INFO: (4) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 4.376765ms) May 13 22:26:32.071: INFO: (4) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.450275ms) May 13 22:26:32.071: INFO: (4) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test<... 
(200; 5.339469ms) May 13 22:26:32.073: INFO: (4) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 6.535859ms) May 13 22:26:32.078: INFO: (5) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.820004ms) May 13 22:26:32.078: INFO: (5) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.799062ms) May 13 22:26:32.078: INFO: (5) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.938878ms) May 13 22:26:32.078: INFO: (5) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.915749ms) May 13 22:26:32.078: INFO: (5) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 5.046088ms) May 13 22:26:32.080: INFO: (5) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 6.385542ms) May 13 22:26:32.080: INFO: (5) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 6.553731ms) May 13 22:26:32.080: INFO: (5) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 6.84763ms) May 13 22:26:32.080: INFO: (5) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 6.807125ms) May 13 22:26:32.082: INFO: (5) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 8.261602ms) May 13 22:26:32.082: INFO: (5) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 8.580996ms) May 13 22:26:32.082: INFO: (5) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 8.594289ms) May 13 22:26:32.082: INFO: (5) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 8.598178ms) May 13 22:26:32.082: INFO: (5) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 8.630661ms) May 13 22:26:32.086: INFO: (6) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.249858ms) May 13 22:26:32.086: INFO: (6) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 4.319212ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 4.420079ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.470044ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.460309ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.508546ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.572724ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 4.740324ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 4.781741ms) May 13 22:26:32.087: INFO: (6) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test<... 
(200; 3.47362ms) May 13 22:26:32.093: INFO: (7) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 3.537451ms) May 13 22:26:32.093: INFO: (7) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 3.505215ms) May 13 22:26:32.093: INFO: (7) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 4.163718ms) May 13 22:26:32.093: INFO: (7) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 4.362562ms) May 13 22:26:32.094: INFO: (7) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.689387ms) May 13 22:26:32.094: INFO: (7) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test (200; 4.960857ms) May 13 22:26:32.100: INFO: (8) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 4.970833ms) May 13 22:26:32.100: INFO: (8) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 4.959992ms) May 13 22:26:32.100: INFO: (8) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 4.980469ms) May 13 22:26:32.102: INFO: (9) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 2.02577ms) May 13 22:26:32.104: INFO: (9) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.078631ms) May 13 22:26:32.104: INFO: (9) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 4.533415ms) May 13 22:26:32.104: INFO: (9) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 4.577467ms) May 13 22:26:32.104: INFO: (9) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.602793ms) May 13 22:26:32.104: INFO: (9) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.641495ms) May 13 22:26:32.104: INFO: (9) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 4.607849ms) May 13 22:26:32.105: INFO: (9) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.691305ms) May 13 22:26:32.105: INFO: (9) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... 
(200; 4.218117ms) May 13 22:26:32.110: INFO: (10) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 4.307699ms) May 13 22:26:32.110: INFO: (10) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.296523ms) May 13 22:26:32.110: INFO: (10) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 4.537281ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.801477ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 4.73844ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.966959ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.95422ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.094814ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 5.0313ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 5.121192ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.152307ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 5.257173ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 5.307828ms) May 13 22:26:32.111: INFO: (10) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 3.270479ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 3.336303ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 2.703044ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 3.370101ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 3.62645ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 3.678542ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 3.340398ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 3.514172ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 3.442674ms) May 13 22:26:32.115: INFO: (11) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 3.586984ms) May 13 22:26:32.117: INFO: (12) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... 
(200; 1.939684ms) May 13 22:26:32.118: INFO: (12) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 2.04165ms) May 13 22:26:32.120: INFO: (12) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 4.634095ms) May 13 22:26:32.120: INFO: (12) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.687094ms) May 13 22:26:32.120: INFO: (12) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.707339ms) May 13 22:26:32.121: INFO: (12) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test (200; 5.285494ms) May 13 22:26:32.121: INFO: (12) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 5.215765ms) May 13 22:26:32.121: INFO: (12) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.329599ms) May 13 22:26:32.124: INFO: (13) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 2.608078ms) May 13 22:26:32.124: INFO: (13) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 3.082871ms) May 13 22:26:32.125: INFO: (13) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 4.328805ms) May 13 22:26:32.125: INFO: (13) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 4.339827ms) May 13 22:26:32.125: INFO: (13) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.446528ms) May 13 22:26:32.125: INFO: (13) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 4.377882ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.43232ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 4.516555ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 4.549964ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.607909ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.634145ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 4.625481ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 4.653499ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 4.729496ms) May 13 22:26:32.126: INFO: (13) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.409865ms) May 13 22:26:32.131: INFO: (14) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 4.600006ms) May 13 22:26:32.131: INFO: (14) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 4.750646ms) May 13 22:26:32.131: INFO: (14) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... 
(200; 5.249345ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 5.259671ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.382199ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 5.562819ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 5.563042ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 5.598567ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 5.593712ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 5.670462ms) May 13 22:26:32.132: INFO: (14) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 5.877785ms) May 13 22:26:32.133: INFO: (14) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 6.307364ms) May 13 22:26:32.133: INFO: (14) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 6.539299ms) May 13 22:26:32.137: INFO: (15) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 3.559802ms) May 13 22:26:32.137: INFO: (15) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 3.853085ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test<... (200; 4.370345ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 4.830546ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 4.799992ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 4.739015ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 4.797293ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 4.791228ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.858739ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 4.816914ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 5.102328ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 5.101089ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.183948ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 5.138591ms) May 13 22:26:32.138: INFO: (15) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.113614ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 3.069187ms) May 13 22:26:32.142: INFO: (16) 
/api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 3.009637ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 3.783896ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 3.792195ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:1080/proxy/: ... (200; 3.917886ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 3.813428ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 3.804231ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 3.894936ms) May 13 22:26:32.142: INFO: (16) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 5.040163ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 5.061454ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... (200; 5.226818ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.227802ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 5.378309ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 5.530175ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 5.769718ms) May 13 22:26:32.155: INFO: (17) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 5.727291ms) May 13 22:26:32.156: INFO: (17) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 6.242219ms) May 13 22:26:32.156: INFO: (17) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 6.185119ms) May 13 22:26:32.156: INFO: (17) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 6.293307ms) May 13 22:26:32.156: INFO: (17) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 13.00012ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 13.041731ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 12.921839ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 12.946866ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 12.98392ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:462/proxy/: tls qux (200; 13.073234ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:1080/proxy/: test<... 
(200; 12.99217ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 13.017539ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 12.995558ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 13.064743ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:160/proxy/: foo (200; 13.053781ms) May 13 22:26:32.169: INFO: (18) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 13.153971ms) May 13 22:26:32.170: INFO: (18) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 13.250539ms) May 13 22:26:32.170: INFO: (18) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 13.357392ms) May 13 22:26:32.170: INFO: (18) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: test<... (200; 4.48643ms) May 13 22:26:32.174: INFO: (19) /api/v1/namespaces/proxy-1243/pods/http:proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 4.414442ms) May 13 22:26:32.175: INFO: (19) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname1/proxy/: foo (200; 4.638619ms) May 13 22:26:32.175: INFO: (19) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:443/proxy/: ... (200; 4.858243ms) May 13 22:26:32.175: INFO: (19) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:162/proxy/: bar (200; 5.107495ms) May 13 22:26:32.175: INFO: (19) /api/v1/namespaces/proxy-1243/pods/https:proxy-service-z7h6j-pknx9:460/proxy/: tls baz (200; 5.204274ms) May 13 22:26:32.175: INFO: (19) /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9/proxy/: test (200; 5.345588ms) May 13 22:26:32.185: INFO: (19) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname2/proxy/: bar (200; 15.551895ms) May 13 22:26:32.186: INFO: (19) /api/v1/namespaces/proxy-1243/services/http:proxy-service-z7h6j:portname1/proxy/: foo (200; 15.615269ms) May 13 22:26:32.186: INFO: (19) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname1/proxy/: tls baz (200; 15.715466ms) May 13 22:26:32.186: INFO: (19) /api/v1/namespaces/proxy-1243/services/https:proxy-service-z7h6j:tlsportname2/proxy/: tls qux (200; 15.712423ms) May 13 22:26:32.186: INFO: (19) /api/v1/namespaces/proxy-1243/services/proxy-service-z7h6j:portname2/proxy/: bar (200; 15.723933ms) STEP: deleting ReplicationController proxy-service-z7h6j in namespace proxy-1243, will wait for the garbage collector to delete the pods May 13 22:26:32.274: INFO: Deleting ReplicationController proxy-service-z7h6j took: 26.904766ms May 13 22:26:32.574: INFO: Terminating ReplicationController proxy-service-z7h6j pods took: 300.19705ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:26:35.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1243" for this suite. 
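The 16 cases above are permutations of the two proxy-subresource URL shapes — pod and service targets, with an optional scheme prefix and an optional port (a number for pods; a name or number for services):

  /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/<path>
  /api/v1/namespaces/<ns>/services/[<scheme>:]<service>[:<port>]/proxy/<path>

Any of them can be replayed by hand against the same cluster, e.g.:

  kubectl get --raw /api/v1/namespaces/proxy-1243/pods/proxy-service-z7h6j-pknx9:160/proxy/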
• [SLOW TEST:15.887 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":232,"skipped":3810,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:26:35.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:26:36.497: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:26:38.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:26:41.724: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a 
namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:26:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4846" for this suite. STEP: Destroying namespace "webhook-4846-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.424 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":233,"skipped":3812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:26:52.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 22:26:52.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4507' May 13 22:26:56.009: INFO: stderr: "" May 13 22:26:56.009: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 13 22:26:56.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4507' May 13 22:27:00.310: INFO: stderr: "" May 13 22:27:00.310: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:00.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4507" for this suite. 
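Referring back to the AdmissionWebhook spec above: the "Registering the webhook via the AdmissionRegistration API" step amounts to a ValidatingWebhookConfiguration pointed at the deployed e2e-test-webhook service, roughly as sketched below (webhook name, path, and rules are illustrative; the caBundle is elided; the whitelisted-namespace behavior comes from a namespaceSelector, omitted here, that skips the marker namespace):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-pods-and-configmaps-demo   # illustrative
webhooks:
- name: deny.example.com                # illustrative
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: webhook-4846
      name: e2e-test-webhook
      path: /always-deny                # illustrative path
    # caBundle: <elided>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]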
• [SLOW TEST:8.272 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":234,"skipped":3839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:00.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 22:27:00.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5479' May 13 22:27:00.666: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 13 22:27:00.666: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 13 22:27:00.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5479' May 13 22:27:00.889: INFO: stderr: "" May 13 22:27:00.889: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:00.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5479" for this suite. 
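As the deprecation notice above says, --generator=job/v1 is on its way out in favor of creating the object directly. What that generator produced is roughly this Job (same image and restart policy as the run above):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-httpd-job
        image: docker.io/library/httpd:2.4.38-alpine
      restartPolicy: OnFailure          # what --restart=OnFailure selects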
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":235,"skipped":3893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:00.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 13 22:27:01.026: INFO: Waiting up to 5m0s for pod "client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab" in namespace "containers-2100" to be "success or failure" May 13 22:27:01.092: INFO: Pod "client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab": Phase="Pending", Reason="", readiness=false. Elapsed: 65.443751ms May 13 22:27:03.218: INFO: Pod "client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191224864s May 13 22:27:05.558: INFO: Pod "client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531642706s May 13 22:27:07.562: INFO: Pod "client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.535463004s STEP: Saw pod success May 13 22:27:07.562: INFO: Pod "client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab" satisfied condition "success or failure" May 13 22:27:07.565: INFO: Trying to get logs from node jerma-worker2 pod client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab container test-container: STEP: delete the pod May 13 22:27:07.612: INFO: Waiting for pod client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab to disappear May 13 22:27:07.617: INFO: Pod client-containers-d0e68055-b6a6-48c3-a3ab-8c528e55acab no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:07.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2100" for this suite. 
• [SLOW TEST:6.781 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3924,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:07.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-6de9cef7-b7b9-4042-b102-f8c745ac2d01 STEP: Creating a pod to test consume secrets May 13 22:27:07.853: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34" in namespace "projected-7674" to be "success or failure" May 13 22:27:07.863: INFO: Pod "pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34": Phase="Pending", Reason="", readiness=false. Elapsed: 10.28214ms May 13 22:27:09.869: INFO: Pod "pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016386756s May 13 22:27:11.973: INFO: Pod "pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119816737s STEP: Saw pod success May 13 22:27:11.973: INFO: Pod "pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34" satisfied condition "success or failure" May 13 22:27:11.976: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34 container projected-secret-volume-test: STEP: delete the pod May 13 22:27:12.251: INFO: Waiting for pod pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34 to disappear May 13 22:27:12.265: INFO: Pod pod-projected-secrets-a0164510-91ce-4064-9354-3606bf775b34 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:12.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7674" for this suite. 
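For the projection above: a sketch of a projected-secret volume with a single key remapped to a new path under an explicit item mode, against the core/v1 types. The secret name is taken from the log; the key, path, and mode values are illustrative assumptions.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	mode := int32(0400)
    	vol := corev1.Volume{
    		Name: "projected-secret-volume",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					Secret: &corev1.SecretProjection{
    						LocalObjectReference: corev1.LocalObjectReference{
    							Name: "projected-secret-test-map-6de9cef7-b7b9-4042-b102-f8c745ac2d01",
    						},
    						Items: []corev1.KeyToPath{{
    							Key:  "data-1",          // illustrative key
    							Path: "new-path-data-1", // remapped path ("mappings")
    							Mode: &mode,             // the "Item Mode" under test
    						}},
    					},
    				}},
    			},
    		},
    	}
    	fmt.Println(vol.Name)
    }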
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3955,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:12.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:27:12.440: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 13 22:27:15.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8980 create -f -' May 13 22:27:18.655: INFO: stderr: "" May 13 22:27:18.655: INFO: stdout: "e2e-test-crd-publish-openapi-3731-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 13 22:27:18.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8980 delete e2e-test-crd-publish-openapi-3731-crds test-cr' May 13 22:27:18.760: INFO: stderr: "" May 13 22:27:18.760: INFO: stdout: "e2e-test-crd-publish-openapi-3731-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 13 22:27:18.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8980 apply -f -' May 13 22:27:19.001: INFO: stderr: "" May 13 22:27:19.001: INFO: stdout: "e2e-test-crd-publish-openapi-3731-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 13 22:27:19.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8980 delete e2e-test-crd-publish-openapi-3731-crds test-cr' May 13 22:27:19.112: INFO: stderr: "" May 13 22:27:19.112: INFO: stdout: "e2e-test-crd-publish-openapi-3731-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 13 22:27:19.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3731-crds' May 13 22:27:19.386: INFO: stderr: "" May 13 22:27:19.386: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3731-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:22.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8980" for this suite. 
• [SLOW TEST:10.044 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":238,"skipped":3974,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:22.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3153 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3153 STEP: Deleting pre-stop pod May 13 22:27:35.566: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:35.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3153" for this suite. 
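A sketch of the tester side of the spec above: a container whose preStop hook calls back to the server pod before the kubelet stops it, which is what increments the "prestop" counter in the JSON. The image, URL, and command are illustrative, and corev1.Handler is the type name in this 1.17-era API (later releases rename it).

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:  "tester",
    		Image: "docker.io/library/busybox:1.29",
    		Lifecycle: &corev1.Lifecycle{
    			PreStop: &corev1.Handler{
    				Exec: &corev1.ExecAction{
    					// Phone home before termination; the server pod
    					// records the hit, as seen in "Received" above.
    					Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
    				},
    			},
    		},
    	}
    	fmt.Println(c.Name)
    }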
• [SLOW TEST:13.265 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":239,"skipped":3992,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:35.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:27:35.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8" in namespace "projected-5988" to be "success or failure" May 13 22:27:35.696: INFO: Pod "downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.888461ms May 13 22:27:37.709: INFO: Pod "downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034368349s May 13 22:27:39.713: INFO: Pod "downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037943733s STEP: Saw pod success May 13 22:27:39.713: INFO: Pod "downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8" satisfied condition "success or failure" May 13 22:27:39.715: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8 container client-container: STEP: delete the pod May 13 22:27:39.752: INFO: Waiting for pod downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8 to disappear May 13 22:27:39.907: INFO: Pod downwardapi-volume-a6b8f89b-bfd7-42c1-93e0-02fa98e1e6e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:39.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5988" for this suite. 
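A sketch of the downward-API projection such a spec consumes: the container's CPU request exposed as a file via resourceFieldRef. The volume name, file path, and container name are illustrative assumptions.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	vol := corev1.Volume{
    		Name: "podinfo",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							// The file the container reads back.
    							Path: "cpu_request",
    							ResourceFieldRef: &corev1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "requests.cpu",
    							},
    						}},
    					},
    				}},
    			},
    		},
    	}
    	fmt.Println(vol.Name)
    }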
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3996,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:39.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 13 22:27:44.150: INFO: Pod pod-hostip-aebd5331-2b7b-4e6b-b767-1e1128dd07b8 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:44.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6447" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:44.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 13 22:27:44.266: INFO: Waiting up to 5m0s for pod "downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7" in namespace "downward-api-2070" to be "success or failure" May 13 22:27:44.270: INFO: Pod "downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024016ms May 13 22:27:46.274: INFO: Pod "downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007648792s May 13 22:27:48.278: INFO: Pod "downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011544778s May 13 22:27:50.281: INFO: Pod "downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014583973s STEP: Saw pod success May 13 22:27:50.281: INFO: Pod "downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7" satisfied condition "success or failure" May 13 22:27:50.283: INFO: Trying to get logs from node jerma-worker pod downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7 container dapi-container: STEP: delete the pod May 13 22:27:50.308: INFO: Waiting for pod downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7 to disappear May 13 22:27:50.312: INFO: Pod downward-api-a6b102d9-e9e9-4839-9379-65d6532102a7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:27:50.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2070" for this suite. • [SLOW TEST:6.159 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4025,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:27:50.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 13 22:28:01.070: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.070: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.106205 6 log.go:172] (0xc006158370) (0xc001d84d20) Create stream I0513 22:28:01.106230 6 log.go:172] (0xc006158370) (0xc001d84d20) Stream added, broadcasting: 1 I0513 22:28:01.107669 6 log.go:172] (0xc006158370) Reply frame received for 1 I0513 22:28:01.107702 6 log.go:172] (0xc006158370) (0xc00165a000) Create stream I0513 22:28:01.107712 6 log.go:172] (0xc006158370) (0xc00165a000) Stream added, broadcasting: 3 I0513 22:28:01.108344 6 log.go:172] (0xc006158370) Reply frame received for 3 I0513 22:28:01.108381 6 log.go:172] (0xc006158370) (0xc00192a000) Create stream I0513 22:28:01.108390 6 log.go:172] (0xc006158370) (0xc00192a000) Stream added, broadcasting: 5 I0513 22:28:01.109035 6 log.go:172] (0xc006158370) Reply frame received for 5 I0513 22:28:01.181810 6 log.go:172] (0xc006158370) Data frame received for 3 
I0513 22:28:01.181839 6 log.go:172] (0xc00165a000) (3) Data frame handling I0513 22:28:01.181863 6 log.go:172] (0xc00165a000) (3) Data frame sent I0513 22:28:01.181874 6 log.go:172] (0xc006158370) Data frame received for 3 I0513 22:28:01.181883 6 log.go:172] (0xc00165a000) (3) Data frame handling I0513 22:28:01.181995 6 log.go:172] (0xc006158370) Data frame received for 5 I0513 22:28:01.182011 6 log.go:172] (0xc00192a000) (5) Data frame handling I0513 22:28:01.183094 6 log.go:172] (0xc006158370) Data frame received for 1 I0513 22:28:01.183107 6 log.go:172] (0xc001d84d20) (1) Data frame handling I0513 22:28:01.183116 6 log.go:172] (0xc001d84d20) (1) Data frame sent I0513 22:28:01.183132 6 log.go:172] (0xc006158370) (0xc001d84d20) Stream removed, broadcasting: 1 I0513 22:28:01.183198 6 log.go:172] (0xc006158370) (0xc001d84d20) Stream removed, broadcasting: 1 I0513 22:28:01.183216 6 log.go:172] (0xc006158370) (0xc00165a000) Stream removed, broadcasting: 3 I0513 22:28:01.183229 6 log.go:172] (0xc006158370) (0xc00192a000) Stream removed, broadcasting: 5 May 13 22:28:01.183: INFO: Exec stderr: "" May 13 22:28:01.183: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.183: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.184968 6 log.go:172] (0xc006158370) Go away received I0513 22:28:01.214730 6 log.go:172] (0xc0068740b0) (0xc00192a1e0) Create stream I0513 22:28:01.214759 6 log.go:172] (0xc0068740b0) (0xc00192a1e0) Stream added, broadcasting: 1 I0513 22:28:01.216236 6 log.go:172] (0xc0068740b0) Reply frame received for 1 I0513 22:28:01.216272 6 log.go:172] (0xc0068740b0) (0xc00261a8c0) Create stream I0513 22:28:01.216282 6 log.go:172] (0xc0068740b0) (0xc00261a8c0) Stream added, broadcasting: 3 I0513 22:28:01.216974 6 log.go:172] (0xc0068740b0) Reply frame received for 3 I0513 22:28:01.217021 6 log.go:172] (0xc0068740b0) (0xc00165a0a0) Create stream I0513 22:28:01.217036 6 log.go:172] (0xc0068740b0) (0xc00165a0a0) Stream added, broadcasting: 5 I0513 22:28:01.217926 6 log.go:172] (0xc0068740b0) Reply frame received for 5 I0513 22:28:01.291054 6 log.go:172] (0xc0068740b0) Data frame received for 3 I0513 22:28:01.291078 6 log.go:172] (0xc00261a8c0) (3) Data frame handling I0513 22:28:01.291104 6 log.go:172] (0xc00261a8c0) (3) Data frame sent I0513 22:28:01.291122 6 log.go:172] (0xc0068740b0) Data frame received for 3 I0513 22:28:01.291132 6 log.go:172] (0xc00261a8c0) (3) Data frame handling I0513 22:28:01.291149 6 log.go:172] (0xc0068740b0) Data frame received for 5 I0513 22:28:01.291169 6 log.go:172] (0xc00165a0a0) (5) Data frame handling I0513 22:28:01.292286 6 log.go:172] (0xc0068740b0) Data frame received for 1 I0513 22:28:01.292321 6 log.go:172] (0xc00192a1e0) (1) Data frame handling I0513 22:28:01.292346 6 log.go:172] (0xc00192a1e0) (1) Data frame sent I0513 22:28:01.292364 6 log.go:172] (0xc0068740b0) (0xc00192a1e0) Stream removed, broadcasting: 1 I0513 22:28:01.292386 6 log.go:172] (0xc0068740b0) Go away received I0513 22:28:01.292471 6 log.go:172] (0xc0068740b0) (0xc00192a1e0) Stream removed, broadcasting: 1 I0513 22:28:01.292521 6 log.go:172] (0xc0068740b0) (0xc00261a8c0) Stream removed, broadcasting: 3 I0513 22:28:01.292556 6 log.go:172] (0xc0068740b0) (0xc00165a0a0) Stream removed, broadcasting: 5 May 13 22:28:01.292: INFO: Exec stderr: "" May 13 22:28:01.292: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.292: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.326070 6 log.go:172] (0xc0060d0840) (0xc0022355e0) Create stream I0513 22:28:01.326088 6 log.go:172] (0xc0060d0840) (0xc0022355e0) Stream added, broadcasting: 1 I0513 22:28:01.327719 6 log.go:172] (0xc0060d0840) Reply frame received for 1 I0513 22:28:01.327758 6 log.go:172] (0xc0060d0840) (0xc00261a960) Create stream I0513 22:28:01.327772 6 log.go:172] (0xc0060d0840) (0xc00261a960) Stream added, broadcasting: 3 I0513 22:28:01.328644 6 log.go:172] (0xc0060d0840) Reply frame received for 3 I0513 22:28:01.328695 6 log.go:172] (0xc0060d0840) (0xc002235680) Create stream I0513 22:28:01.328713 6 log.go:172] (0xc0060d0840) (0xc002235680) Stream added, broadcasting: 5 I0513 22:28:01.329875 6 log.go:172] (0xc0060d0840) Reply frame received for 5 I0513 22:28:01.393066 6 log.go:172] (0xc0060d0840) Data frame received for 5 I0513 22:28:01.393098 6 log.go:172] (0xc002235680) (5) Data frame handling I0513 22:28:01.393273 6 log.go:172] (0xc0060d0840) Data frame received for 3 I0513 22:28:01.393299 6 log.go:172] (0xc00261a960) (3) Data frame handling I0513 22:28:01.393319 6 log.go:172] (0xc00261a960) (3) Data frame sent I0513 22:28:01.393329 6 log.go:172] (0xc0060d0840) Data frame received for 3 I0513 22:28:01.393337 6 log.go:172] (0xc00261a960) (3) Data frame handling I0513 22:28:01.394460 6 log.go:172] (0xc0060d0840) Data frame received for 1 I0513 22:28:01.394511 6 log.go:172] (0xc0022355e0) (1) Data frame handling I0513 22:28:01.394543 6 log.go:172] (0xc0022355e0) (1) Data frame sent I0513 22:28:01.394692 6 log.go:172] (0xc0060d0840) (0xc0022355e0) Stream removed, broadcasting: 1 I0513 22:28:01.394787 6 log.go:172] (0xc0060d0840) (0xc0022355e0) Stream removed, broadcasting: 1 I0513 22:28:01.394802 6 log.go:172] (0xc0060d0840) (0xc00261a960) Stream removed, broadcasting: 3 I0513 22:28:01.394814 6 log.go:172] (0xc0060d0840) (0xc002235680) Stream removed, broadcasting: 5 May 13 22:28:01.394: INFO: Exec stderr: "" May 13 22:28:01.394: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.394: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.394913 6 log.go:172] (0xc0060d0840) Go away received I0513 22:28:01.429023 6 log.go:172] (0xc006932370) (0xc00165afa0) Create stream I0513 22:28:01.429048 6 log.go:172] (0xc006932370) (0xc00165afa0) Stream added, broadcasting: 1 I0513 22:28:01.430944 6 log.go:172] (0xc006932370) Reply frame received for 1 I0513 22:28:01.430965 6 log.go:172] (0xc006932370) (0xc00192a320) Create stream I0513 22:28:01.430982 6 log.go:172] (0xc006932370) (0xc00192a320) Stream added, broadcasting: 3 I0513 22:28:01.431779 6 log.go:172] (0xc006932370) Reply frame received for 3 I0513 22:28:01.431821 6 log.go:172] (0xc006932370) (0xc002235720) Create stream I0513 22:28:01.431838 6 log.go:172] (0xc006932370) (0xc002235720) Stream added, broadcasting: 5 I0513 22:28:01.432707 6 log.go:172] (0xc006932370) Reply frame received for 5 I0513 22:28:01.487380 6 log.go:172] (0xc006932370) Data frame received for 3 I0513 22:28:01.487404 6 log.go:172] (0xc00192a320) (3) Data frame handling I0513 22:28:01.487414 6 log.go:172] (0xc00192a320) (3) Data frame sent I0513 22:28:01.487421 6 log.go:172] (0xc006932370) 
Data frame received for 3 I0513 22:28:01.487431 6 log.go:172] (0xc00192a320) (3) Data frame handling I0513 22:28:01.487444 6 log.go:172] (0xc006932370) Data frame received for 5 I0513 22:28:01.487453 6 log.go:172] (0xc002235720) (5) Data frame handling I0513 22:28:01.488508 6 log.go:172] (0xc006932370) Data frame received for 1 I0513 22:28:01.488537 6 log.go:172] (0xc00165afa0) (1) Data frame handling I0513 22:28:01.488558 6 log.go:172] (0xc00165afa0) (1) Data frame sent I0513 22:28:01.488591 6 log.go:172] (0xc006932370) (0xc00165afa0) Stream removed, broadcasting: 1 I0513 22:28:01.488626 6 log.go:172] (0xc006932370) Go away received I0513 22:28:01.488718 6 log.go:172] (0xc006932370) (0xc00165afa0) Stream removed, broadcasting: 1 I0513 22:28:01.488733 6 log.go:172] (0xc006932370) (0xc00192a320) Stream removed, broadcasting: 3 I0513 22:28:01.488745 6 log.go:172] (0xc006932370) (0xc002235720) Stream removed, broadcasting: 5 May 13 22:28:01.488: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 13 22:28:01.488: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.488: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.516131 6 log.go:172] (0xc006158c60) (0xc001d85180) Create stream I0513 22:28:01.516161 6 log.go:172] (0xc006158c60) (0xc001d85180) Stream added, broadcasting: 1 I0513 22:28:01.517662 6 log.go:172] (0xc006158c60) Reply frame received for 1 I0513 22:28:01.517689 6 log.go:172] (0xc006158c60) (0xc00165b180) Create stream I0513 22:28:01.517708 6 log.go:172] (0xc006158c60) (0xc00165b180) Stream added, broadcasting: 3 I0513 22:28:01.518326 6 log.go:172] (0xc006158c60) Reply frame received for 3 I0513 22:28:01.518350 6 log.go:172] (0xc006158c60) (0xc001d85220) Create stream I0513 22:28:01.518359 6 log.go:172] (0xc006158c60) (0xc001d85220) Stream added, broadcasting: 5 I0513 22:28:01.518962 6 log.go:172] (0xc006158c60) Reply frame received for 5 I0513 22:28:01.586564 6 log.go:172] (0xc006158c60) Data frame received for 5 I0513 22:28:01.586580 6 log.go:172] (0xc001d85220) (5) Data frame handling I0513 22:28:01.586591 6 log.go:172] (0xc006158c60) Data frame received for 3 I0513 22:28:01.586598 6 log.go:172] (0xc00165b180) (3) Data frame handling I0513 22:28:01.586605 6 log.go:172] (0xc00165b180) (3) Data frame sent I0513 22:28:01.586610 6 log.go:172] (0xc006158c60) Data frame received for 3 I0513 22:28:01.586615 6 log.go:172] (0xc00165b180) (3) Data frame handling I0513 22:28:01.587957 6 log.go:172] (0xc006158c60) Data frame received for 1 I0513 22:28:01.587988 6 log.go:172] (0xc001d85180) (1) Data frame handling I0513 22:28:01.588015 6 log.go:172] (0xc001d85180) (1) Data frame sent I0513 22:28:01.588039 6 log.go:172] (0xc006158c60) (0xc001d85180) Stream removed, broadcasting: 1 I0513 22:28:01.588151 6 log.go:172] (0xc006158c60) Go away received I0513 22:28:01.588172 6 log.go:172] (0xc006158c60) (0xc001d85180) Stream removed, broadcasting: 1 I0513 22:28:01.588185 6 log.go:172] (0xc006158c60) (0xc00165b180) Stream removed, broadcasting: 3 I0513 22:28:01.588192 6 log.go:172] (0xc006158c60) (0xc001d85220) Stream removed, broadcasting: 5 May 13 22:28:01.588: INFO: Exec stderr: "" May 13 22:28:01.588: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.588: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.611256 6 log.go:172] (0xc006874630) (0xc00192a780) Create stream I0513 22:28:01.611273 6 log.go:172] (0xc006874630) (0xc00192a780) Stream added, broadcasting: 1 I0513 22:28:01.612944 6 log.go:172] (0xc006874630) Reply frame received for 1 I0513 22:28:01.612992 6 log.go:172] (0xc006874630) (0xc00261aa00) Create stream I0513 22:28:01.613009 6 log.go:172] (0xc006874630) (0xc00261aa00) Stream added, broadcasting: 3 I0513 22:28:01.614234 6 log.go:172] (0xc006874630) Reply frame received for 3 I0513 22:28:01.614250 6 log.go:172] (0xc006874630) (0xc00192a8c0) Create stream I0513 22:28:01.614260 6 log.go:172] (0xc006874630) (0xc00192a8c0) Stream added, broadcasting: 5 I0513 22:28:01.615070 6 log.go:172] (0xc006874630) Reply frame received for 5 I0513 22:28:01.654583 6 log.go:172] (0xc006874630) Data frame received for 3 I0513 22:28:01.654623 6 log.go:172] (0xc00261aa00) (3) Data frame handling I0513 22:28:01.654648 6 log.go:172] (0xc00261aa00) (3) Data frame sent I0513 22:28:01.654668 6 log.go:172] (0xc006874630) Data frame received for 5 I0513 22:28:01.654678 6 log.go:172] (0xc00192a8c0) (5) Data frame handling I0513 22:28:01.654904 6 log.go:172] (0xc006874630) Data frame received for 3 I0513 22:28:01.654934 6 log.go:172] (0xc00261aa00) (3) Data frame handling I0513 22:28:01.656067 6 log.go:172] (0xc006874630) Data frame received for 1 I0513 22:28:01.656077 6 log.go:172] (0xc00192a780) (1) Data frame handling I0513 22:28:01.656082 6 log.go:172] (0xc00192a780) (1) Data frame sent I0513 22:28:01.656141 6 log.go:172] (0xc006874630) (0xc00192a780) Stream removed, broadcasting: 1 I0513 22:28:01.656170 6 log.go:172] (0xc006874630) (0xc00192a780) Stream removed, broadcasting: 1 I0513 22:28:01.656176 6 log.go:172] (0xc006874630) (0xc00261aa00) Stream removed, broadcasting: 3 I0513 22:28:01.656243 6 log.go:172] (0xc006874630) (0xc00192a8c0) Stream removed, broadcasting: 5 May 13 22:28:01.656: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 13 22:28:01.656: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.656: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.656710 6 log.go:172] (0xc006874630) Go away received I0513 22:28:01.682993 6 log.go:172] (0xc005aa1c30) (0xc00261ae60) Create stream I0513 22:28:01.683018 6 log.go:172] (0xc005aa1c30) (0xc00261ae60) Stream added, broadcasting: 1 I0513 22:28:01.684520 6 log.go:172] (0xc005aa1c30) Reply frame received for 1 I0513 22:28:01.684545 6 log.go:172] (0xc005aa1c30) (0xc00261af00) Create stream I0513 22:28:01.684553 6 log.go:172] (0xc005aa1c30) (0xc00261af00) Stream added, broadcasting: 3 I0513 22:28:01.685551 6 log.go:172] (0xc005aa1c30) Reply frame received for 3 I0513 22:28:01.685582 6 log.go:172] (0xc005aa1c30) (0xc002235860) Create stream I0513 22:28:01.685600 6 log.go:172] (0xc005aa1c30) (0xc002235860) Stream added, broadcasting: 5 I0513 22:28:01.686420 6 log.go:172] (0xc005aa1c30) Reply frame received for 5 I0513 22:28:01.747573 6 log.go:172] (0xc005aa1c30) Data frame received for 5 I0513 22:28:01.747613 6 log.go:172] (0xc002235860) (5) Data frame handling I0513 22:28:01.747639 6 log.go:172] (0xc005aa1c30) Data frame received for 3 I0513 22:28:01.747647 6 
log.go:172] (0xc00261af00) (3) Data frame handling I0513 22:28:01.747655 6 log.go:172] (0xc00261af00) (3) Data frame sent I0513 22:28:01.747662 6 log.go:172] (0xc005aa1c30) Data frame received for 3 I0513 22:28:01.747687 6 log.go:172] (0xc00261af00) (3) Data frame handling I0513 22:28:01.748817 6 log.go:172] (0xc005aa1c30) Data frame received for 1 I0513 22:28:01.748833 6 log.go:172] (0xc00261ae60) (1) Data frame handling I0513 22:28:01.748852 6 log.go:172] (0xc00261ae60) (1) Data frame sent I0513 22:28:01.748868 6 log.go:172] (0xc005aa1c30) (0xc00261ae60) Stream removed, broadcasting: 1 I0513 22:28:01.748987 6 log.go:172] (0xc005aa1c30) (0xc00261ae60) Stream removed, broadcasting: 1 I0513 22:28:01.749027 6 log.go:172] (0xc005aa1c30) Go away received I0513 22:28:01.749071 6 log.go:172] (0xc005aa1c30) (0xc00261af00) Stream removed, broadcasting: 3 I0513 22:28:01.749105 6 log.go:172] (0xc005aa1c30) (0xc002235860) Stream removed, broadcasting: 5 May 13 22:28:01.749: INFO: Exec stderr: "" May 13 22:28:01.749: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:01.749: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:01.768969 6 log.go:172] (0xc0060d0dc0) (0xc0022359a0) Create stream I0513 22:28:01.768993 6 log.go:172] (0xc0060d0dc0) (0xc0022359a0) Stream added, broadcasting: 1 I0513 22:28:01.770375 6 log.go:172] (0xc0060d0dc0) Reply frame received for 1 I0513 22:28:01.770396 6 log.go:172] (0xc0060d0dc0) (0xc00165b5e0) Create stream I0513 22:28:01.770405 6 log.go:172] (0xc0060d0dc0) (0xc00165b5e0) Stream added, broadcasting: 3 I0513 22:28:01.770969 6 log.go:172] (0xc0060d0dc0) Reply frame received for 3 I0513 22:28:01.770994 6 log.go:172] (0xc0060d0dc0) (0xc002235a40) Create stream I0513 22:28:01.771007 6 log.go:172] (0xc0060d0dc0) (0xc002235a40) Stream added, broadcasting: 5 I0513 22:28:01.771660 6 log.go:172] (0xc0060d0dc0) Reply frame received for 5 I0513 22:28:01.827393 6 log.go:172] (0xc0060d0dc0) Data frame received for 3 I0513 22:28:01.827423 6 log.go:172] (0xc00165b5e0) (3) Data frame handling I0513 22:28:01.827433 6 log.go:172] (0xc00165b5e0) (3) Data frame sent I0513 22:28:01.827440 6 log.go:172] (0xc0060d0dc0) Data frame received for 3 I0513 22:28:01.827453 6 log.go:172] (0xc00165b5e0) (3) Data frame handling I0513 22:28:01.827472 6 log.go:172] (0xc0060d0dc0) Data frame received for 5 I0513 22:28:01.827482 6 log.go:172] (0xc002235a40) (5) Data frame handling I0513 22:28:01.828649 6 log.go:172] (0xc0060d0dc0) Data frame received for 1 I0513 22:28:01.828672 6 log.go:172] (0xc0022359a0) (1) Data frame handling I0513 22:28:01.828689 6 log.go:172] (0xc0022359a0) (1) Data frame sent I0513 22:28:01.828703 6 log.go:172] (0xc0060d0dc0) (0xc0022359a0) Stream removed, broadcasting: 1 I0513 22:28:01.828795 6 log.go:172] (0xc0060d0dc0) (0xc0022359a0) Stream removed, broadcasting: 1 I0513 22:28:01.828826 6 log.go:172] (0xc0060d0dc0) (0xc00165b5e0) Stream removed, broadcasting: 3 I0513 22:28:01.828865 6 log.go:172] (0xc0060d0dc0) (0xc002235a40) Stream removed, broadcasting: 5 May 13 22:28:01.828: INFO: Exec stderr: "" I0513 22:28:01.828943 6 log.go:172] (0xc0060d0dc0) Go away received May 13 22:28:01.828: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 
22:28:01.828: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:02.048299 6 log.go:172] (0xc006932a50) (0xc00165bf40) Create stream I0513 22:28:02.048338 6 log.go:172] (0xc006932a50) (0xc00165bf40) Stream added, broadcasting: 1 I0513 22:28:02.050299 6 log.go:172] (0xc006932a50) Reply frame received for 1 I0513 22:28:02.050334 6 log.go:172] (0xc006932a50) (0xc00192ab40) Create stream I0513 22:28:02.050350 6 log.go:172] (0xc006932a50) (0xc00192ab40) Stream added, broadcasting: 3 I0513 22:28:02.051226 6 log.go:172] (0xc006932a50) Reply frame received for 3 I0513 22:28:02.051256 6 log.go:172] (0xc006932a50) (0xc002235d60) Create stream I0513 22:28:02.051267 6 log.go:172] (0xc006932a50) (0xc002235d60) Stream added, broadcasting: 5 I0513 22:28:02.052258 6 log.go:172] (0xc006932a50) Reply frame received for 5 I0513 22:28:02.107316 6 log.go:172] (0xc006932a50) Data frame received for 5 I0513 22:28:02.107347 6 log.go:172] (0xc002235d60) (5) Data frame handling I0513 22:28:02.107365 6 log.go:172] (0xc006932a50) Data frame received for 3 I0513 22:28:02.107375 6 log.go:172] (0xc00192ab40) (3) Data frame handling I0513 22:28:02.107400 6 log.go:172] (0xc00192ab40) (3) Data frame sent I0513 22:28:02.107415 6 log.go:172] (0xc006932a50) Data frame received for 3 I0513 22:28:02.107422 6 log.go:172] (0xc00192ab40) (3) Data frame handling I0513 22:28:02.108373 6 log.go:172] (0xc006932a50) Data frame received for 1 I0513 22:28:02.108387 6 log.go:172] (0xc00165bf40) (1) Data frame handling I0513 22:28:02.108396 6 log.go:172] (0xc00165bf40) (1) Data frame sent I0513 22:28:02.108405 6 log.go:172] (0xc006932a50) (0xc00165bf40) Stream removed, broadcasting: 1 I0513 22:28:02.108472 6 log.go:172] (0xc006932a50) (0xc00165bf40) Stream removed, broadcasting: 1 I0513 22:28:02.108484 6 log.go:172] (0xc006932a50) (0xc00192ab40) Stream removed, broadcasting: 3 I0513 22:28:02.108622 6 log.go:172] (0xc006932a50) Go away received I0513 22:28:02.108689 6 log.go:172] (0xc006932a50) (0xc002235d60) Stream removed, broadcasting: 5 May 13 22:28:02.108: INFO: Exec stderr: "" May 13 22:28:02.108: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2334 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 13 22:28:02.108: INFO: >>> kubeConfig: /root/.kube/config I0513 22:28:02.140443 6 log.go:172] (0xc006874bb0) (0xc00192af00) Create stream I0513 22:28:02.140466 6 log.go:172] (0xc006874bb0) (0xc00192af00) Stream added, broadcasting: 1 I0513 22:28:02.141811 6 log.go:172] (0xc006874bb0) Reply frame received for 1 I0513 22:28:02.141832 6 log.go:172] (0xc006874bb0) (0xc0015fa5a0) Create stream I0513 22:28:02.141840 6 log.go:172] (0xc006874bb0) (0xc0015fa5a0) Stream added, broadcasting: 3 I0513 22:28:02.142485 6 log.go:172] (0xc006874bb0) Reply frame received for 3 I0513 22:28:02.142516 6 log.go:172] (0xc006874bb0) (0xc002235e00) Create stream I0513 22:28:02.142526 6 log.go:172] (0xc006874bb0) (0xc002235e00) Stream added, broadcasting: 5 I0513 22:28:02.143000 6 log.go:172] (0xc006874bb0) Reply frame received for 5 I0513 22:28:02.195168 6 log.go:172] (0xc006874bb0) Data frame received for 5 I0513 22:28:02.195269 6 log.go:172] (0xc002235e00) (5) Data frame handling I0513 22:28:02.195312 6 log.go:172] (0xc006874bb0) Data frame received for 3 I0513 22:28:02.195352 6 log.go:172] (0xc0015fa5a0) (3) Data frame handling I0513 22:28:02.195383 6 log.go:172] (0xc0015fa5a0) (3) Data frame sent I0513 22:28:02.195402 6 log.go:172] 
(0xc006874bb0) Data frame received for 3 I0513 22:28:02.195412 6 log.go:172] (0xc0015fa5a0) (3) Data frame handling I0513 22:28:02.196605 6 log.go:172] (0xc006874bb0) Data frame received for 1 I0513 22:28:02.196681 6 log.go:172] (0xc00192af00) (1) Data frame handling I0513 22:28:02.196709 6 log.go:172] (0xc00192af00) (1) Data frame sent I0513 22:28:02.196744 6 log.go:172] (0xc006874bb0) (0xc00192af00) Stream removed, broadcasting: 1 I0513 22:28:02.196799 6 log.go:172] (0xc006874bb0) Go away received I0513 22:28:02.197330 6 log.go:172] (0xc006874bb0) (0xc00192af00) Stream removed, broadcasting: 1 I0513 22:28:02.197368 6 log.go:172] (0xc006874bb0) (0xc0015fa5a0) Stream removed, broadcasting: 3 I0513 22:28:02.197402 6 log.go:172] (0xc006874bb0) (0xc002235e00) Stream removed, broadcasting: 5 May 13 22:28:02.197: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:28:02.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2334" for this suite. • [SLOW TEST:11.890 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4038,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:28:02.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 13 22:28:02.345: INFO: Waiting up to 5m0s for pod "pod-2b38f465-7a57-4134-a817-311bfffb767e" in namespace "emptydir-2501" to be "success or failure" May 13 22:28:02.355: INFO: Pod "pod-2b38f465-7a57-4134-a817-311bfffb767e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.870875ms May 13 22:28:04.440: INFO: Pod "pod-2b38f465-7a57-4134-a817-311bfffb767e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094962212s May 13 22:28:06.444: INFO: Pod "pod-2b38f465-7a57-4134-a817-311bfffb767e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.098682793s STEP: Saw pod success May 13 22:28:06.444: INFO: Pod "pod-2b38f465-7a57-4134-a817-311bfffb767e" satisfied condition "success or failure" May 13 22:28:06.446: INFO: Trying to get logs from node jerma-worker pod pod-2b38f465-7a57-4134-a817-311bfffb767e container test-container: STEP: delete the pod May 13 22:28:06.464: INFO: Waiting for pod pod-2b38f465-7a57-4134-a817-311bfffb767e to disappear May 13 22:28:06.469: INFO: Pod pod-2b38f465-7a57-4134-a817-311bfffb767e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:28:06.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2501" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4046,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:28:06.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-z4vw STEP: Creating a pod to test atomic-volume-subpath May 13 22:28:06.856: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-z4vw" in namespace "subpath-4402" to be "success or failure" May 13 22:28:06.883: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Pending", Reason="", readiness=false. Elapsed: 26.975789ms May 13 22:28:08.887: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030300464s May 13 22:28:10.891: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 4.034900807s May 13 22:28:12.896: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 6.039311319s May 13 22:28:14.899: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 8.04260593s May 13 22:28:16.902: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 10.046089981s May 13 22:28:18.906: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 12.049658836s May 13 22:28:20.911: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 14.054523987s May 13 22:28:22.916: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 16.059256568s May 13 22:28:24.920: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.063840437s May 13 22:28:26.924: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 20.067364507s May 13 22:28:28.927: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Running", Reason="", readiness=true. Elapsed: 22.07057858s May 13 22:28:30.931: INFO: Pod "pod-subpath-test-configmap-z4vw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074472972s STEP: Saw pod success May 13 22:28:30.931: INFO: Pod "pod-subpath-test-configmap-z4vw" satisfied condition "success or failure" May 13 22:28:30.934: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-z4vw container test-container-subpath-configmap-z4vw: STEP: delete the pod May 13 22:28:30.969: INFO: Waiting for pod pod-subpath-test-configmap-z4vw to disappear May 13 22:28:30.985: INFO: Pod pod-subpath-test-configmap-z4vw no longer exists STEP: Deleting pod pod-subpath-test-configmap-z4vw May 13 22:28:30.985: INFO: Deleting pod "pod-subpath-test-configmap-z4vw" in namespace "subpath-4402" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:28:30.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4402" for this suite. • [SLOW TEST:24.520 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":245,"skipped":4050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:28:31.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-4b79198d-e402-44f7-966d-7690283c24dc STEP: Creating a pod to test consume configMaps May 13 22:28:31.083: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9" in namespace "projected-5876" to be "success or failure" May 13 22:28:31.099: INFO: Pod "pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.207147ms May 13 22:28:33.103: INFO: Pod "pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020478024s May 13 22:28:35.107: INFO: Pod "pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024278801s STEP: Saw pod success May 13 22:28:35.107: INFO: Pod "pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9" satisfied condition "success or failure" May 13 22:28:35.110: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9 container projected-configmap-volume-test: STEP: delete the pod May 13 22:28:35.150: INFO: Waiting for pod pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9 to disappear May 13 22:28:35.158: INFO: Pod pod-projected-configmaps-0b309ccc-67ef-4979-8d85-169a93a039e9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:28:35.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5876" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4123,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:28:35.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:28:35.291: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 13 22:28:40.294: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 13 22:28:40.294: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 13 22:28:42.299: INFO: Creating deployment "test-rollover-deployment" May 13 22:28:42.314: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 13 22:28:44.320: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 13 22:28:44.327: INFO: Ensure that both replica sets have 1 created replica May 13 22:28:44.333: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 13 22:28:44.339: INFO: Updating deployment test-rollover-deployment May 13 22:28:44.339: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 13 22:28:46.384: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 13 22:28:46.389: INFO: Make sure deployment "test-rollover-deployment" is complete May 13 22:28:46.394: INFO: all replica sets need to contain the pod-template-hash label May 13 22:28:46.394: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005724, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:28:48.403: INFO: all replica sets need to contain the pod-template-hash label May 13 22:28:48.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005727, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:28:50.403: INFO: all replica sets need to contain the pod-template-hash label May 13 22:28:50.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005727, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:28:52.401: INFO: all replica sets need to contain the pod-template-hash label May 13 22:28:52.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005727, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:28:54.403: INFO: all replica sets need to contain the pod-template-hash label May 13 22:28:54.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005727, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:28:56.402: INFO: all replica sets need to contain the pod-template-hash label May 13 22:28:56.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005727, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725005722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:28:58.558: INFO: May 13 22:28:58.558: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 13 22:28:58.565: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-605 /apis/apps/v1/namespaces/deployment-605/deployments/test-rollover-deployment fd96916f-3083-4b09-aa36-a23b9f025c6d 15963296 2 2020-05-13 22:28:42 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00513ca88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-13 22:28:42 +0000 UTC,LastTransitionTime:2020-05-13 22:28:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-13 22:28:58 +0000 UTC,LastTransitionTime:2020-05-13 22:28:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 13 22:28:58.568: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-605 /apis/apps/v1/namespaces/deployment-605/replicasets/test-rollover-deployment-574d6dfbff 16043987-eec7-4c5b-bffb-5f332661220a 15963284 2 2020-05-13 22:28:44 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fd96916f-3083-4b09-aa36-a23b9f025c6d 0xc0050d2f57 0xc0050d2f58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050d2fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 13 22:28:58.568: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 13 22:28:58.568: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-605 /apis/apps/v1/namespaces/deployment-605/replicasets/test-rollover-controller db3b27c5-54f6-4620-8f3f-4e40edc399ab 15963294 2 2020-05-13 22:28:35 +0000 UTC map[name:rollover-pod 
pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fd96916f-3083-4b09-aa36-a23b9f025c6d 0xc0050d2e77 0xc0050d2e78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0050d2ee8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:28:58.568: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-605 /apis/apps/v1/namespaces/deployment-605/replicasets/test-rollover-deployment-f6c94f66c d4b5d4d3-2ef1-41b5-a5f7-fae860ac15a7 15963227 2 2020-05-13 22:28:42 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fd96916f-3083-4b09-aa36-a23b9f025c6d 0xc0050d3030 0xc0050d3031}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050d30a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:28:58.571: INFO: Pod "test-rollover-deployment-574d6dfbff-jmqxs" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-jmqxs test-rollover-deployment-574d6dfbff- deployment-605 /api/v1/namespaces/deployment-605/pods/test-rollover-deployment-574d6dfbff-jmqxs bd0c9223-0569-458d-a2f7-9a41a06468d8 15963245 0 2020-05-13 22:28:44 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 16043987-eec7-4c5b-bffb-5f332661220a 0xc0051afa87 0xc0051afa88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d42t8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d42t8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d42t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:28:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:28:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:28:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-13 22:28:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.50,StartTime:2020-05-13 22:28:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-13 22:28:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://3ab57d405a9b283ff5183baa19a701ad2161a721747ebe6e362e2385b303d084,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:28:58.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-605" for this suite. • [SLOW TEST:23.412 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":247,"skipped":4130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:28:58.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-vc6w STEP: Creating a pod to test atomic-volume-subpath May 13 22:28:58.719: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vc6w" in namespace "subpath-4556" to be "success or failure" May 13 22:28:58.722: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2052ms May 13 22:29:00.726: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007005139s May 13 22:29:02.730: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 4.011090243s May 13 22:29:04.824: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 6.10490221s May 13 22:29:06.829: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 8.110362306s May 13 22:29:08.834: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.114905934s May 13 22:29:10.838: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 12.118965474s May 13 22:29:12.841: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 14.122359869s May 13 22:29:14.845: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 16.125782873s May 13 22:29:16.848: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 18.128450009s May 13 22:29:18.852: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 20.133011336s May 13 22:29:20.857: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Running", Reason="", readiness=true. Elapsed: 22.137855629s May 13 22:29:22.861: INFO: Pod "pod-subpath-test-downwardapi-vc6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.141900559s STEP: Saw pod success May 13 22:29:22.861: INFO: Pod "pod-subpath-test-downwardapi-vc6w" satisfied condition "success or failure" May 13 22:29:22.864: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-vc6w container test-container-subpath-downwardapi-vc6w: STEP: delete the pod May 13 22:29:23.004: INFO: Waiting for pod pod-subpath-test-downwardapi-vc6w to disappear May 13 22:29:23.007: INFO: Pod pod-subpath-test-downwardapi-vc6w no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vc6w May 13 22:29:23.007: INFO: Deleting pod "pod-subpath-test-downwardapi-vc6w" in namespace "subpath-4556" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:29:23.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4556" for this suite. 
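The subpath spec above mounts a single file out of an atomically-written downwardAPI volume via subPath. A minimal hand-run sketch of the same pattern, assuming a reachable cluster; the pod name and busybox image are illustrative stand-ins for the generated test pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "cat /etc/podname && sleep 10"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podname   # a single file taken from the volume
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downwardapi-demo   # prints the pod's own name once it has run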
• [SLOW TEST:24.438 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":248,"skipped":4166,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:29:23.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 13 22:29:29.850: INFO: 5 pods remaining May 13 22:29:29.850: INFO: 0 pods has nil DeletionTimestamp May 13 22:29:29.850: INFO: May 13 22:29:30.706: INFO: 0 pods remaining May 13 22:29:30.706: INFO: 0 pods has nil DeletionTimestamp May 13 22:29:30.706: INFO: STEP: Gathering metrics W0513 22:29:32.738822 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 13 22:29:32.738: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:29:32.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5164" for this suite. 
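The deleteOptions behavior exercised by this garbage-collector spec is foreground cascading deletion: the owner object is kept, carrying a foregroundDeletion finalizer, until every dependent pod is gone. A minimal sketch, assuming kubectl v1.20+ for the --cascade=foreground flag (the test itself sets propagationPolicy=Foreground through the API directly); names are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo   # illustrative name
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# the RC object remains visible until all of its pods have been deleted:
kubectl delete rc gc-demo --cascade=foreground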
• [SLOW TEST:9.986 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":249,"skipped":4188,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:29:33.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 13 22:29:34.634: INFO: >>> kubeConfig: /root/.kube/config May 13 22:29:37.459: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:29:47.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1062" for this suite. 
• [SLOW TEST:15.041 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":250,"skipped":4195,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:29:48.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:29:48.412: INFO: Creating ReplicaSet my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4 May 13 22:29:48.423: INFO: Pod name my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4: Found 0 pods out of 1 May 13 22:29:53.428: INFO: Pod name my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4: Found 1 pods out of 1 May 13 22:29:53.428: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4" is running May 13 22:29:53.431: INFO: Pod "my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4-s9rm2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:29:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:29:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:29:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-13 22:29:48 +0000 UTC Reason: Message:}]) May 13 22:29:53.431: INFO: Trying to dial the pod May 13 22:29:58.442: INFO: Controller my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4: Got expected result from replica 1 [my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4-s9rm2]: "my-hostname-basic-a9aad84f-7e49-4626-9479-d410c684e4f4-s9rm2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:29:58.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4487" for this suite. 
• [SLOW TEST:10.404 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":251,"skipped":4200,"failed":0} [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:29:58.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 13 22:30:03.611: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:30:04.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9849" for this suite. 
• [SLOW TEST:6.185 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":252,"skipped":4200,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:30:04.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-38f1e3c5-9b5d-4466-9215-e610a72c8f3e in namespace container-probe-170 May 13 22:30:10.824: INFO: Started pod test-webserver-38f1e3c5-9b5d-4466-9215-e610a72c8f3e in namespace container-probe-170 STEP: checking the pod's current state and verifying that restartCount is present May 13 22:30:10.827: INFO: Initial restart count of pod test-webserver-38f1e3c5-9b5d-4466-9215-e610a72c8f3e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:11.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-170" for this suite. 
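The probe pattern under test: an HTTP liveness probe against a path that keeps succeeding, so restartCount must stay at 0 for the whole observation window (hence the ~4-minute wall time above). A minimal sketch; nginx and the / path stand in for the test-webserver image and its /healthz endpoint:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /    # always healthy, so the kubelet never restarts the container
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# should remain 0 for as long as you care to watch:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'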
• [SLOW TEST:246.891 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:11.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 13 22:34:11.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7399' May 13 22:34:11.815: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 13 22:34:11.816: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 13 22:34:14.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7399' May 13 22:34:14.150: INFO: stderr: "" May 13 22:34:14.150: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:14.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7399" for this suite. 
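Note the deprecation warning captured in stderr above: on a v1.17 client, a bare kubectl run still goes through the deployment/apps.v1 generator, which is exactly what this spec verifies (current kubectl creates a plain Pod instead). Reproducing by hand with an illustrative name:

kubectl run httpd-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl get deployment,pods -l run=httpd-demo   # the generated objects carry a run=<name> label
kubectl delete deployment httpd-demo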
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":254,"skipped":4233,"failed":0} ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:14.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-281810fe-27e6-4d30-8527-1d45ce4a07fd STEP: Creating a pod to test consume secrets May 13 22:34:14.808: INFO: Waiting up to 5m0s for pod "pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028" in namespace "secrets-3037" to be "success or failure" May 13 22:34:14.936: INFO: Pod "pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028": Phase="Pending", Reason="", readiness=false. Elapsed: 128.855696ms May 13 22:34:16.985: INFO: Pod "pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17748225s May 13 22:34:18.989: INFO: Pod "pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181691893s STEP: Saw pod success May 13 22:34:18.989: INFO: Pod "pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028" satisfied condition "success or failure" May 13 22:34:18.992: INFO: Trying to get logs from node jerma-worker pod pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028 container secret-env-test: STEP: delete the pod May 13 22:34:19.063: INFO: Waiting for pod pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028 to disappear May 13 22:34:19.110: INFO: Pod pod-secrets-12601889-d66b-4167-abf2-2bcceaa29028 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:19.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3037" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4233,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:19.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:19.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3288" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":256,"skipped":4248,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:19.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 13 22:34:19.345: INFO: Waiting up to 5m0s for pod "client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc" in namespace "containers-2665" to be "success or failure" May 13 22:34:19.368: INFO: Pod "client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.301518ms May 13 22:34:21.416: INFO: Pod "client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070825496s May 13 22:34:23.421: INFO: Pod "client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075602589s May 13 22:34:25.425: INFO: Pod "client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.079565856s STEP: Saw pod success May 13 22:34:25.425: INFO: Pod "client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc" satisfied condition "success or failure" May 13 22:34:25.428: INFO: Trying to get logs from node jerma-worker2 pod client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc container test-container: STEP: delete the pod May 13 22:34:25.464: INFO: Waiting for pod client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc to disappear May 13 22:34:25.482: INFO: Pod client-containers-24a4b8de-7622-4335-99c4-5fad8fcd3efc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:25.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2665" for this suite. • [SLOW TEST:6.286 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4262,"failed":0} [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:25.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:29.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5803" for this suite. 
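What this kubelet spec checks is simply that a container's stdout ends up in kubectl logs. By hand, with an illustrative pod name:

kubectl run logs-demo --image=busybox --restart=Never -- sh -c 'echo hello from busybox'
kubectl logs logs-demo   # prints "hello from busybox" once the container has run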
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:29.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:34:29.735: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:35.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1417" for this suite. • [SLOW TEST:5.857 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":259,"skipped":4308,"failed":0} [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:35.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 13 22:34:40.113: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9728 pod-service-account-061a280c-670b-4bbc-9e7a-e7ca3f2401ff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 13 22:34:40.328: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9728 pod-service-account-061a280c-670b-4bbc-9e7a-e7ca3f2401ff -c=test -- 
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 13 22:34:40.519: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9728 pod-service-account-061a280c-670b-4bbc-9e7a-e7ca3f2401ff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:34:40.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9728" for this suite. • [SLOW TEST:5.265 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":260,"skipped":4308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:34:40.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4968 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4968 STEP: Creating statefulset with conflicting port in namespace statefulset-4968 STEP: Waiting until pod test-pod will start running in namespace statefulset-4968 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4968 May 13 22:34:44.890: INFO: Observed stateful pod in namespace: statefulset-4968, name: ss-0, uid: bdeb34ba-81ce-49a7-967d-01a9ebbe4496, status phase: Pending. Waiting for statefulset controller to delete. May 13 22:34:45.272: INFO: Observed stateful pod in namespace: statefulset-4968, name: ss-0, uid: bdeb34ba-81ce-49a7-967d-01a9ebbe4496, status phase: Failed. Waiting for statefulset controller to delete. May 13 22:34:45.308: INFO: Observed stateful pod in namespace: statefulset-4968, name: ss-0, uid: bdeb34ba-81ce-49a7-967d-01a9ebbe4496, status phase: Failed. Waiting for statefulset controller to delete. 
May 13 22:34:45.316: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4968 STEP: Removing pod with conflicting port in namespace statefulset-4968 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4968 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 13 22:34:51.405: INFO: Deleting all statefulset in ns statefulset-4968 May 13 22:34:51.408: INFO: Scaling statefulset ss to 0 May 13 22:35:01.433: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:35:01.436: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:35:01.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4968" for this suite. • [SLOW TEST:20.721 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":261,"skipped":4342,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:35:01.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-102f163f-1dd8-42ed-98a7-290031ef35f9 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-102f163f-1dd8-42ed-98a7-290031ef35f9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:10.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9242" for this suite. 
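The update path this spec waits on (~69 s above): a configMap volume is refreshed by the kubelet on its periodic sync, so an edit to the ConfigMap eventually, typically within a minute, becomes visible inside a running pod. Sketch with illustrative names:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
EOF
kubectl exec cm-watch-demo -- cat /etc/cm/data-1     # value-1
kubectl patch configmap cm-demo -p '{"data":{"data-1":"value-2"}}'
# re-run the exec after the kubelet's next sync; it then prints value-2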
• [SLOW TEST:68.833 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4349,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:10.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 13 22:36:10.365: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:36:10.382: INFO: Waiting for terminating namespaces to be deleted... May 13 22:36:10.385: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 13 22:36:10.391: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:36:10.391: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:36:10.391: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:36:10.391: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:36:10.391: INFO: pod-configmaps-185f9519-ae24-4487-a082-46f68b7e5b63 from configmap-9242 started at 2020-05-13 22:35:01 +0000 UTC (1 container statuses recorded) May 13 22:36:10.391: INFO: Container configmap-volume-test ready: true, restart count 0 May 13 22:36:10.391: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 13 22:36:10.411: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:36:10.411: INFO: Container kube-proxy ready: true, restart count 0 May 13 22:36:10.411: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 13 22:36:10.411: INFO: Container kube-hunter ready: false, restart count 0 May 13 22:36:10.411: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 13 22:36:10.411: INFO: Container kindnet-cni ready: true, restart count 0 May 13 22:36:10.411: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 13 22:36:10.411: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160eb766bf1d6c0d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:11.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-975" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":263,"skipped":4350,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:11.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 13 22:36:11.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9458' May 13 22:36:11.838: INFO: stderr: "" May 13 22:36:11.838: INFO: stdout: "pod/pause created\n" May 13 22:36:11.838: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 13 22:36:11.838: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9458" to be "running and ready" May 13 22:36:11.852: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.372558ms May 13 22:36:13.856: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018470765s May 13 22:36:15.861: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.022651608s May 13 22:36:15.861: INFO: Pod "pause" satisfied condition "running and ready" May 13 22:36:15.861: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 13 22:36:15.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9458' May 13 22:36:15.966: INFO: stderr: "" May 13 22:36:15.966: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 13 22:36:15.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9458' May 13 22:36:16.097: INFO: stderr: "" May 13 22:36:16.097: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 13 22:36:16.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9458' May 13 22:36:16.229: INFO: stderr: "" May 13 22:36:16.229: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 13 22:36:16.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9458' May 13 22:36:16.353: INFO: stderr: "" May 13 22:36:16.353: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 13 22:36:16.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9458' May 13 22:36:16.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:36:16.615: INFO: stdout: "pod \"pause\" force deleted\n" May 13 22:36:16.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9458' May 13 22:36:16.914: INFO: stderr: "No resources found in kubectl-9458 namespace.\n" May 13 22:36:16.914: INFO: stdout: "" May 13 22:36:16.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9458 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 22:36:17.210: INFO: stderr: "" May 13 22:36:17.210: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:17.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9458" for this suite. 
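The label/unlabel cycle above can be replayed by hand; this mirrors the commands the test ran, with the kubeconfig and namespace flags omitted:

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label    # TESTING-LABEL column shows the value
kubectl label pods pause testing-label-   # a trailing '-' removes the label
kubectl get pod pause -L testing-label    # the column is now empty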
• [SLOW TEST:5.981 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":264,"skipped":4352,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:17.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:36:17.611: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 13 22:36:18.887: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:19.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2818" for this suite. 
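A minimal sketch of the quota-exceeded condition this test checks (names and image are placeholders): a ResourceQuota capping the namespace at two pods, plus a ReplicationController asking for three replicas, leaves a ReplicaFailure condition on the RC until it is scaled back within quota.

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    app: quota-demo
  template:
    metadata:
      labels:
        app: quota-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
kubectl scale rc condition-test --replicas=2   # the failure condition clears once quota is satisfied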
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":265,"skipped":4370,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:19.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8985/configmap-test-a6f8676b-3f5f-4c27-a2b7-19220f7d477e STEP: Creating a pod to test consume configMaps May 13 22:36:21.197: INFO: Waiting up to 5m0s for pod "pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4" in namespace "configmap-8985" to be "success or failure" May 13 22:36:21.367: INFO: Pod "pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4": Phase="Pending", Reason="", readiness=false. Elapsed: 169.636119ms May 13 22:36:23.371: INFO: Pod "pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173440651s May 13 22:36:25.502: INFO: Pod "pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4": Phase="Running", Reason="", readiness=true. Elapsed: 4.304591917s May 13 22:36:27.506: INFO: Pod "pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.308495396s STEP: Saw pod success May 13 22:36:27.506: INFO: Pod "pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4" satisfied condition "success or failure" May 13 22:36:27.508: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4 container env-test: STEP: delete the pod May 13 22:36:27.524: INFO: Waiting for pod pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4 to disappear May 13 22:36:27.529: INFO: Pod pod-configmaps-eefee1fd-5e61-4abe-acc5-fbc4e47c04c4 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:27.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8985" for this suite. 
• [SLOW TEST:7.626 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4371,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:27.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-2342befb-d0b3-4e17-98e2-5354b2cc9a89 STEP: Creating configMap with name cm-test-opt-upd-f7334bc9-56b6-44fb-a2bb-1609eb65765f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2342befb-d0b3-4e17-98e2-5354b2cc9a89 STEP: Updating configmap cm-test-opt-upd-f7334bc9-56b6-44fb-a2bb-1609eb65765f STEP: Creating configMap with name cm-test-opt-create-9bb751cd-7319-4a0f-935f-32e54a62f9fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:35.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7040" for this suite. 
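The optional-update behaviour exercised above comes from projected volumes whose ConfigMap sources are marked optional: the pod starts even while a source is missing, and the kubelet rewrites the mounted files as the maps are created, updated, or deleted. A sketch under placeholder names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-opt-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-opt-upd
          optional: true
      - configMap:
          name: cm-opt-create
          optional: true
EOF
# Create, update, and delete cm-opt-upd / cm-opt-create, then watch the files
# under /etc/projected inside the container converge (propagation can take up
# to a kubelet sync period, which is why the test polls).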
• [SLOW TEST:8.332 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4381,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:35.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:36:36.473: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:36:38.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006196, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006196, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006196, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006196, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:36:41.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:36:41.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5663-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:42.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-6460" for this suite. STEP: Destroying namespace "webhook-6460-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.196 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":268,"skipped":4385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:43.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:36:43.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c" in namespace "projected-4363" to be "success or failure" May 13 22:36:43.117: INFO: Pod "downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92732ms May 13 22:36:45.128: INFO: Pod "downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014429028s May 13 22:36:47.131: INFO: Pod "downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017897286s STEP: Saw pod success May 13 22:36:47.131: INFO: Pod "downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c" satisfied condition "success or failure" May 13 22:36:47.137: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c container client-container: STEP: delete the pod May 13 22:36:47.495: INFO: Waiting for pod downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c to disappear May 13 22:36:47.498: INFO: Pod downwardapi-volume-0f085121-ed16-4b7b-be75-7261dc0f308c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:47.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4363" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4418,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:47.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 13 22:36:47.856: INFO: Waiting up to 5m0s for pod "pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa" in namespace "emptydir-4901" to be "success or failure" May 13 22:36:47.879: INFO: Pod "pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066661ms May 13 22:36:50.011: INFO: Pod "pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154109163s May 13 22:36:52.014: INFO: Pod "pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157979497s STEP: Saw pod success May 13 22:36:52.014: INFO: Pod "pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa" satisfied condition "success or failure" May 13 22:36:52.018: INFO: Trying to get logs from node jerma-worker pod pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa container test-container: STEP: delete the pod May 13 22:36:52.163: INFO: Waiting for pod pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa to disappear May 13 22:36:52.202: INFO: Pod pod-479b4eee-eefc-4da9-99ac-e0d3b0ea6cfa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:52.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4901" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4431,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:52.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:36:52.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa" in namespace "downward-api-7929" to be "success or failure" May 13 22:36:52.379: INFO: Pod "downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 17.592786ms May 13 22:36:54.418: INFO: Pod "downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056598877s May 13 22:36:56.423: INFO: Pod "downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060872706s STEP: Saw pod success May 13 22:36:56.423: INFO: Pod "downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa" satisfied condition "success or failure" May 13 22:36:56.425: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa container client-container: STEP: delete the pod May 13 22:36:56.451: INFO: Waiting for pod downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa to disappear May 13 22:36:56.480: INFO: Pod downwardapi-volume-7d03d2d3-1af8-4db7-b3c2-f7bc89eaa9aa no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:36:56.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7929" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4436,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:36:56.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ca3edc48-2981-47fd-b4ce-d7f11192764b STEP: Creating a pod to test consume configMaps May 13 22:36:56.788: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5" in namespace "projected-5350" to be "success or failure" May 13 22:36:56.939: INFO: Pod "pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 151.001697ms May 13 22:36:58.943: INFO: Pod "pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154199875s May 13 22:37:00.946: INFO: Pod "pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157597935s STEP: Saw pod success May 13 22:37:00.946: INFO: Pod "pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5" satisfied condition "success or failure" May 13 22:37:00.948: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5 container projected-configmap-volume-test: STEP: delete the pod May 13 22:37:01.144: INFO: Waiting for pod pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5 to disappear May 13 22:37:01.180: INFO: Pod pod-projected-configmaps-746497e2-405b-40a5-9c37-2781094e7cc5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:37:01.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5350" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4442,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:37:01.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:37:01.529: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521" in namespace "downward-api-125" to be "success or failure" May 13 22:37:01.534: INFO: Pod "downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74917ms May 13 22:37:03.551: INFO: Pod "downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021857132s May 13 22:37:05.556: INFO: Pod "downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026413145s STEP: Saw pod success May 13 22:37:05.556: INFO: Pod "downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521" satisfied condition "success or failure" May 13 22:37:05.558: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521 container client-container: STEP: delete the pod May 13 22:37:05.629: INFO: Waiting for pod downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521 to disappear May 13 22:37:05.639: INFO: Pod downwardapi-volume-1545835b-b160-45fe-977b-2d01056fc521 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:37:05.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-125" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4453,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:37:05.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 13 22:37:06.576: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 13 22:37:08.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:37:10.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006226, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 
13 22:37:13.783: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 13 22:37:13.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:37:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6085" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.877 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":274,"skipped":4461,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:37:15.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:37:16.337: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:37:18.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:37:20.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725006236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:37:23.387: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:37:23.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1482" for this suite. STEP: Destroying namespace "webhook-1482-markers" for this suite. 
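The update/patch cycle above can be expressed directly against the admissionregistration API. Assuming a configuration named like the one the framework registers (the name and webhook/rule indices below are placeholders), dropping and restoring the CREATE operation in the first rule looks like:

kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# ConfigMap creations are no longer mutated ...
kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
# ... and now they are again, which is the behaviour the test asserts.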
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":275,"skipped":4479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:37:23.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 13 22:37:23.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 13 22:37:24.268: INFO: stderr: "" May 13 22:37:24.268: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:37:24.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7816" for this suite. 
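The api-versions assertion is a one-liner against any cluster:

kubectl api-versions | grep -x v1    # exits 0 only if the core v1 group/version is served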
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":276,"skipped":4503,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:37:24.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 13 22:37:24.390: INFO: Waiting up to 5m0s for pod "downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195" in namespace "projected-3062" to be "success or failure" May 13 22:37:24.718: INFO: Pod "downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195": Phase="Pending", Reason="", readiness=false. Elapsed: 328.322037ms May 13 22:37:26.722: INFO: Pod "downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332049317s May 13 22:37:28.726: INFO: Pod "downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336085332s STEP: Saw pod success May 13 22:37:28.726: INFO: Pod "downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195" satisfied condition "success or failure" May 13 22:37:28.790: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195 container client-container: STEP: delete the pod May 13 22:37:28.848: INFO: Waiting for pod downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195 to disappear May 13 22:37:28.851: INFO: Pod downwardapi-volume-822e38f3-4c41-43bd-a19c-fc237342e195 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 13 22:37:28.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3062" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4523,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 13 22:37:28.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:37:35.065: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.068: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.071: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.074: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.090: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.093: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.095: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.098: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:35.104: INFO: Lookups using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local] May 13 22:37:40.109: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource 
(get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.113: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.117: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.120: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.130: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.134: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.137: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.139: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:40.145: INFO: Lookups using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local] May 13 22:37:45.109: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:45.114: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:45.117: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc) May 13 22:37:45.120: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local from 
pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:45.129: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:45.132: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:45.135: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:45.138: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:45.144: INFO: Lookups using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local]
May 13 22:37:50.169: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.171: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.312: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.316: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.324: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.327: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.329: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.332: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:50.336: INFO: Lookups using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local]
May 13 22:37:55.132: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.136: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.139: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.143: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.153: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.156: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.159: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.161: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:37:55.167: INFO: Lookups using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local]
May 13 22:38:00.109: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.113: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.115: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.118: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.125: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.127: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.129: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.132: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local from pod dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc: the server could not find the requested resource (get pods dns-test-041a8013-579f-4832-8098-c4be711a18cc)
May 13 22:38:00.137: INFO: Lookups using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local jessie_udp@dns-test-service-2.dns-9558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9558.svc.cluster.local]
May 13 22:38:05.171: INFO: DNS probes using dns-9558/dns-test-041a8013-579f-4832-8098-c4be711a18cc succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 13 22:38:05.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9558" for this suite.
• [SLOW TEST:36.678 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":278,"skipped":4538,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
May 13 22:38:05.557: INFO: Running AfterSuite actions on all nodes
May 13 22:38:05.557: INFO: Running AfterSuite actions on node 1
May 13 22:38:05.557: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 5247.765 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
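------------------------------
Note on the DNS probe rounds above: the log shows a poll-and-retry pattern, where a fixed set of names is looked up roughly every 5 seconds, each round reports which lookups still fail, and the spec only proceeds once an entire round succeeds. The Go sketch below illustrates that pattern only; it is not the e2e suite's actual probe code (which runs dig/nslookup inside "wheezy" and "jessie" querier containers over both UDP and TCP), and the name list, per-lookup timeout, retry interval, and overall deadline are assumptions read off the log.

// probe_sketch.go - minimal poll-and-retry DNS probe, modeled on the log above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Names taken from the log; in the real test these are resolved from
	// inside the cluster, so they would not resolve from an arbitrary host.
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-9558.svc.cluster.local",
		"dns-test-service-2.dns-9558.svc.cluster.local",
	}

	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
	for {
		// One probe round: attempt every lookup, collect the failures.
		var failed []string
		for _, name := range names {
			ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
			_, err := net.DefaultResolver.LookupHost(ctx, name)
			cancel()
			if err != nil {
				failed = append(failed, name)
			}
		}
		if len(failed) == 0 {
			fmt.Println("DNS probes succeeded")
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("giving up; lookups still failing for: %v\n", failed)
			return
		}
		fmt.Printf("lookups failed for: %v; retrying\n", failed)
		time.Sleep(5 * time.Second) // the log shows ~5s between rounds
	}
}

A transient "failed for: [...]" round followed by a later "DNS probes ... succeeded", as seen above, is the expected shape of this loop while the headless service's records propagate; only a round that still fails at the deadline would fail the spec.
------------------------------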